2026-04-13 00:00:06.571363 | Job console starting
2026-04-13 00:00:06.608041 | Updating git repos
2026-04-13 00:00:07.067139 | Cloning repos into workspace
2026-04-13 00:00:07.361578 | Restoring repo states
2026-04-13 00:00:07.407177 | Merging changes
2026-04-13 00:00:07.407470 | Checking out repos
2026-04-13 00:00:08.031350 | Preparing playbooks
2026-04-13 00:00:09.280854 | Running Ansible setup
2026-04-13 00:00:17.658408 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-13 00:00:19.442589 |
2026-04-13 00:00:19.442701 | PLAY [Base pre]
2026-04-13 00:00:19.511682 |
2026-04-13 00:00:19.511944 | TASK [Setup log path fact]
2026-04-13 00:00:19.559356 | orchestrator | ok
2026-04-13 00:00:19.614373 |
2026-04-13 00:00:19.615763 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-13 00:00:19.729279 | orchestrator | ok
2026-04-13 00:00:19.774711 |
2026-04-13 00:00:19.774825 | TASK [emit-job-header : Print job information]
2026-04-13 00:00:19.876406 | # Job Information
2026-04-13 00:00:19.876576 | Ansible Version: 2.16.14
2026-04-13 00:00:19.876613 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-13 00:00:19.876648 | Pipeline: periodic-midnight
2026-04-13 00:00:19.876671 | Executor: 521e9411259a
2026-04-13 00:00:19.876693 | Triggered by: https://github.com/osism/testbed
2026-04-13 00:00:19.876715 | Event ID: bd0f35ccae72404487dde80aa1dbe86f
2026-04-13 00:00:19.904165 |
2026-04-13 00:00:19.904622 | LOOP [emit-job-header : Print node information]
2026-04-13 00:00:20.323225 | orchestrator | ok:
2026-04-13 00:00:20.323443 | orchestrator | # Node Information
2026-04-13 00:00:20.323482 | orchestrator | Inventory Hostname: orchestrator
2026-04-13 00:00:20.323509 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-13 00:00:20.323531 | orchestrator | Username: zuul-testbed04
2026-04-13 00:00:20.323553 | orchestrator | Distro: Debian 12.13
2026-04-13 00:00:20.323576 | orchestrator | Provider: static-testbed
2026-04-13 00:00:20.323597 | orchestrator | Region:
2026-04-13 00:00:20.323619 | orchestrator | Label: testbed-orchestrator
2026-04-13 00:00:20.323639 | orchestrator | Product Name: OpenStack Nova
2026-04-13 00:00:20.323658 | orchestrator | Interface IP: 81.163.193.140
2026-04-13 00:00:20.348543 |
2026-04-13 00:00:20.348651 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-13 00:00:21.585615 | orchestrator -> localhost | changed
2026-04-13 00:00:21.593545 |
2026-04-13 00:00:21.593655 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-13 00:00:24.954725 | orchestrator -> localhost | changed
2026-04-13 00:00:24.972358 |
2026-04-13 00:00:24.972473 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-13 00:00:25.975785 | orchestrator -> localhost | ok
2026-04-13 00:00:25.983314 |
2026-04-13 00:00:25.983431 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-13 00:00:26.052191 | orchestrator | ok
2026-04-13 00:00:26.087299 | orchestrator | included: /var/lib/zuul/builds/ff4365037a774f318c88cb05742d8e11/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-13 00:00:26.116649 |
2026-04-13 00:00:26.116753 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-13 00:00:29.105527 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-13 00:00:29.105717 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/ff4365037a774f318c88cb05742d8e11/work/ff4365037a774f318c88cb05742d8e11_id_rsa
2026-04-13 00:00:29.105751 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/ff4365037a774f318c88cb05742d8e11/work/ff4365037a774f318c88cb05742d8e11_id_rsa.pub
2026-04-13 00:00:29.105774 | orchestrator -> localhost | The key fingerprint is:
2026-04-13 00:00:29.105797 | orchestrator -> localhost | SHA256:Rf/tQd4+zi8MBnPifg0YHIUuSss+eYo2WTf/EBJ6j6o zuul-build-sshkey
2026-04-13 00:00:29.105816 | orchestrator -> localhost | The key's randomart image is:
2026-04-13 00:00:29.105846 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-13 00:00:29.105865 | orchestrator -> localhost | | .o. |
2026-04-13 00:00:29.105883 | orchestrator -> localhost | | .o. |
2026-04-13 00:00:29.105901 | orchestrator -> localhost | | .o... . |
2026-04-13 00:00:29.105918 | orchestrator -> localhost | | ...oB ..o..|
2026-04-13 00:00:29.105934 | orchestrator -> localhost | | o.oSo.B .oo|
2026-04-13 00:00:29.105954 | orchestrator -> localhost | | =.o+o.+ o.|
2026-04-13 00:00:29.105971 | orchestrator -> localhost | | + o.+o. = .o|
2026-04-13 00:00:29.105988 | orchestrator -> localhost | | +.+.. o.. * .|
2026-04-13 00:00:29.106005 | orchestrator -> localhost | | .Eoo+ o. +o|
2026-04-13 00:00:29.106022 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-13 00:00:29.106065 | orchestrator -> localhost | ok: Runtime: 0:00:01.716981
2026-04-13 00:00:29.112691 |
2026-04-13 00:00:29.112818 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-13 00:00:29.145976 | orchestrator | ok
2026-04-13 00:00:29.186533 | orchestrator | included: /var/lib/zuul/builds/ff4365037a774f318c88cb05742d8e11/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-13 00:00:29.212091 |
2026-04-13 00:00:29.212183 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-13 00:00:29.251586 | orchestrator | skipping: Conditional result was False
2026-04-13 00:00:29.274648 |
2026-04-13 00:00:29.274751 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-13 00:00:30.266581 | orchestrator | changed
2026-04-13 00:00:30.275805 |
2026-04-13 00:00:30.275894 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-13 00:00:30.601161 | orchestrator | ok
2026-04-13 00:00:30.610872 |
2026-04-13 00:00:30.610969 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-13 00:00:31.131537 | orchestrator | ok
2026-04-13 00:00:31.147332 |
2026-04-13 00:00:31.147454 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-13 00:00:31.623282 | orchestrator | ok
2026-04-13 00:00:31.628845 |
2026-04-13 00:00:31.628925 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-13 00:00:31.668336 | orchestrator | skipping: Conditional result was False
2026-04-13 00:00:31.677301 |
2026-04-13 00:00:31.677395 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-13 00:00:32.717774 | orchestrator -> localhost | changed
2026-04-13 00:00:32.729519 |
2026-04-13 00:00:32.729615 | TASK [add-build-sshkey : Add back temp key]
2026-04-13 00:00:33.760991 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/ff4365037a774f318c88cb05742d8e11/work/ff4365037a774f318c88cb05742d8e11_id_rsa (zuul-build-sshkey)
2026-04-13 00:00:33.761169 | orchestrator -> localhost | ok: Runtime: 0:00:00.013324
2026-04-13 00:00:33.766924 |
2026-04-13 00:00:33.767013 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-13 00:00:34.183236 | orchestrator | ok
2026-04-13 00:00:34.198376 |
2026-04-13 00:00:34.198487 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-13 00:00:34.264157 | orchestrator | skipping: Conditional result was False
2026-04-13 00:00:34.350002 |
2026-04-13 00:00:34.350100 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-13 00:00:34.829793 | orchestrator | ok
2026-04-13 00:00:34.848588 |
2026-04-13 00:00:34.848689 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-13 00:00:34.891888 | orchestrator | ok
2026-04-13 00:00:34.904333 |
2026-04-13 00:00:34.904434 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-13 00:00:35.608262 | orchestrator -> localhost | ok
2026-04-13 00:00:35.614301 |
2026-04-13 00:00:35.614381 | TASK [validate-host : Collect information about the host]
2026-04-13 00:00:37.049068 | orchestrator | ok
2026-04-13 00:00:37.089911 |
2026-04-13 00:00:37.090036 | TASK [validate-host : Sanitize hostname]
2026-04-13 00:00:37.239123 | orchestrator | ok
2026-04-13 00:00:37.251365 |
2026-04-13 00:00:37.251480 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-13 00:00:38.689255 | orchestrator -> localhost | changed
2026-04-13 00:00:38.694191 |
2026-04-13 00:00:38.694289 | TASK [validate-host : Collect information about zuul worker]
2026-04-13 00:00:39.349657 | orchestrator | ok
2026-04-13 00:00:39.354754 |
2026-04-13 00:00:39.354861 | TASK [validate-host : Write out all zuul information for each host]
2026-04-13 00:00:40.502402 | orchestrator -> localhost | changed
2026-04-13 00:00:40.510884 |
2026-04-13 00:00:40.510970 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-13 00:00:40.830689 | orchestrator | ok
2026-04-13 00:00:40.836397 |
2026-04-13 00:00:40.836478 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-13 00:02:01.565893 | orchestrator | changed:
2026-04-13 00:02:01.567424 | orchestrator | .d..t...... src/
2026-04-13 00:02:01.567499 | orchestrator | .d..t...... src/github.com/
2026-04-13 00:02:01.567527 | orchestrator | .d..t...... src/github.com/osism/
2026-04-13 00:02:01.567550 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-13 00:02:01.567572 | orchestrator | RedHat.yml
2026-04-13 00:02:01.582493 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-13 00:02:01.582511 | orchestrator | RedHat.yml
2026-04-13 00:02:01.582562 | orchestrator | = 1.53.0"...
2026-04-13 00:02:17.330291 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-04-13 00:02:17.346835 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-13 00:02:17.831084 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-13 00:02:18.622739 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-13 00:02:18.682524 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-13 00:02:19.208549 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-13 00:02:19.266738 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-13 00:02:19.828637 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-13 00:02:19.828698 | orchestrator |
2026-04-13 00:02:19.828705 | orchestrator | Providers are signed by their developers.
2026-04-13 00:02:19.828710 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-13 00:02:19.828715 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-13 00:02:19.828722 | orchestrator |
2026-04-13 00:02:19.828727 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-13 00:02:19.828731 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-13 00:02:19.828740 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-13 00:02:19.828745 | orchestrator | you run "tofu init" in the future.
2026-04-13 00:02:19.829000 | orchestrator |
2026-04-13 00:02:19.829013 | orchestrator | OpenTofu has been successfully initialized!
2026-04-13 00:02:19.829028 | orchestrator |
2026-04-13 00:02:19.829037 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-13 00:02:19.829041 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-13 00:02:19.829045 | orchestrator | should now work.
2026-04-13 00:02:19.829053 | orchestrator |
2026-04-13 00:02:19.829057 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-13 00:02:19.829061 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-13 00:02:19.829066 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-13 00:02:19.991310 | orchestrator | Created and switched to workspace "ci"!
2026-04-13 00:02:19.991357 | orchestrator |
2026-04-13 00:02:19.991363 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-13 00:02:19.991368 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-13 00:02:19.991389 | orchestrator | for this configuration.
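The init output above resolves three providers. Note that the log is truncated just before this point: only the fragment `= 1.53.0"...` of the first "Finding ... versions matching" line survives, so which provider carries that constraint is not visible. A hypothetical `required_providers` block consistent with what the log does show (`hashicorp/local >= 2.2.0`, `hashicorp/null` unconstrained, and, as an assumption, the `>= 1.53.0` constraint attributed to the openstack provider) might look like this; it is a sketch, not the actual testbed source:

```hcl
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
      # Assumption: the truncated ">= 1.53.0" constraint in the log
      # belongs to this provider (v3.4.0 was selected, which satisfies it).
      version = ">= 1.53.0"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # matches "Finding hashicorp/local versions matching ..."
    }
    null = {
      source = "hashicorp/null" # no constraint: log shows "Finding latest version"
    }
  }
}
```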
2026-04-13 00:02:20.088102 | orchestrator | ci.auto.tfvars
2026-04-13 00:02:20.278355 | orchestrator | default_custom.tf
2026-04-13 00:02:22.532421 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-13 00:02:23.084573 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-13 00:02:23.460345 | orchestrator |
2026-04-13 00:02:23.460408 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-13 00:02:23.460416 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-13 00:02:23.460558 | orchestrator |   + create
2026-04-13 00:02:23.460579 | orchestrator |  <= read (data resources)
2026-04-13 00:02:23.460593 | orchestrator |
2026-04-13 00:02:23.460598 | orchestrator | OpenTofu will perform the following actions:
2026-04-13 00:02:23.460721 | orchestrator |
2026-04-13 00:02:23.460739 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-04-13 00:02:23.460744 | orchestrator |   # (config refers to values not yet known)
2026-04-13 00:02:23.460748 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-04-13 00:02:23.460752 | orchestrator |       + checksum = (known after apply)
2026-04-13 00:02:23.460757 | orchestrator |       + created_at = (known after apply)
2026-04-13 00:02:23.460761 | orchestrator |       + file = (known after apply)
2026-04-13 00:02:23.460765 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.460786 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.460790 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-13 00:02:23.460794 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-13 00:02:23.460798 | orchestrator |       + most_recent = true
2026-04-13 00:02:23.460803 | orchestrator |       + name = (known after apply)
2026-04-13 00:02:23.460807 | orchestrator |       + protected = (known after apply)
2026-04-13 00:02:23.460811 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.460817 | orchestrator |       + schema = (known after apply)
2026-04-13 00:02:23.460821 | orchestrator |       + size_bytes = (known after apply)
2026-04-13 00:02:23.460825 | orchestrator |       + tags = (known after apply)
2026-04-13 00:02:23.460829 | orchestrator |       + updated_at = (known after apply)
2026-04-13 00:02:23.460833 | orchestrator |     }
2026-04-13 00:02:23.460920 | orchestrator |
2026-04-13 00:02:23.460932 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-04-13 00:02:23.460937 | orchestrator |   # (config refers to values not yet known)
2026-04-13 00:02:23.460941 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-04-13 00:02:23.460945 | orchestrator |       + checksum = (known after apply)
2026-04-13 00:02:23.460949 | orchestrator |       + created_at = (known after apply)
2026-04-13 00:02:23.460953 | orchestrator |       + file = (known after apply)
2026-04-13 00:02:23.460957 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.460960 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.460964 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-13 00:02:23.460968 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-13 00:02:23.460972 | orchestrator |       + most_recent = true
2026-04-13 00:02:23.460976 | orchestrator |       + name = (known after apply)
2026-04-13 00:02:23.460979 | orchestrator |       + protected = (known after apply)
2026-04-13 00:02:23.460983 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.460987 | orchestrator |       + schema = (known after apply)
2026-04-13 00:02:23.460991 | orchestrator |       + size_bytes = (known after apply)
2026-04-13 00:02:23.460995 | orchestrator |       + tags = (known after apply)
2026-04-13 00:02:23.460999 | orchestrator |       + updated_at = (known after apply)
2026-04-13 00:02:23.461003 | orchestrator |     }
2026-04-13 00:02:23.461081 | orchestrator |
2026-04-13 00:02:23.461092 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-04-13 00:02:23.461097 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-04-13 00:02:23.461101 | orchestrator |       + content = (known after apply)
2026-04-13 00:02:23.461105 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-13 00:02:23.461109 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-13 00:02:23.461113 | orchestrator |       + content_md5 = (known after apply)
2026-04-13 00:02:23.461116 | orchestrator |       + content_sha1 = (known after apply)
2026-04-13 00:02:23.461120 | orchestrator |       + content_sha256 = (known after apply)
2026-04-13 00:02:23.461124 | orchestrator |       + content_sha512 = (known after apply)
2026-04-13 00:02:23.461128 | orchestrator |       + directory_permission = "0777"
2026-04-13 00:02:23.461131 | orchestrator |       + file_permission = "0644"
2026-04-13 00:02:23.461135 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-04-13 00:02:23.461139 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.461143 | orchestrator |     }
2026-04-13 00:02:23.461213 | orchestrator |
2026-04-13 00:02:23.461224 | orchestrator |   # local_file.id_rsa_pub will be created
2026-04-13 00:02:23.461229 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-04-13 00:02:23.461233 | orchestrator |       + content = (known after apply)
2026-04-13 00:02:23.461236 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-13 00:02:23.461240 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-13 00:02:23.461244 | orchestrator |       + content_md5 = (known after apply)
2026-04-13 00:02:23.461248 | orchestrator |       + content_sha1 = (known after apply)
2026-04-13 00:02:23.461251 | orchestrator |       + content_sha256 = (known after apply)
2026-04-13 00:02:23.461255 | orchestrator |       + content_sha512 = (known after apply)
2026-04-13 00:02:23.461259 | orchestrator |       + directory_permission = "0777"
2026-04-13 00:02:23.461262 | orchestrator |       + file_permission = "0644"
2026-04-13 00:02:23.461271 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-04-13 00:02:23.461275 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.461278 | orchestrator |     }
2026-04-13 00:02:23.461354 | orchestrator |
2026-04-13 00:02:23.461370 | orchestrator |   # local_file.inventory will be created
2026-04-13 00:02:23.461374 | orchestrator |   + resource "local_file" "inventory" {
2026-04-13 00:02:23.461378 | orchestrator |       + content = (known after apply)
2026-04-13 00:02:23.461382 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-13 00:02:23.461386 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-13 00:02:23.461390 | orchestrator |       + content_md5 = (known after apply)
2026-04-13 00:02:23.461393 | orchestrator |       + content_sha1 = (known after apply)
2026-04-13 00:02:23.461397 | orchestrator |       + content_sha256 = (known after apply)
2026-04-13 00:02:23.461401 | orchestrator |       + content_sha512 = (known after apply)
2026-04-13 00:02:23.461405 | orchestrator |       + directory_permission = "0777"
2026-04-13 00:02:23.461409 | orchestrator |       + file_permission = "0644"
2026-04-13 00:02:23.461412 | orchestrator |       + filename = "inventory.ci"
2026-04-13 00:02:23.461416 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.461420 | orchestrator |     }
2026-04-13 00:02:23.461505 | orchestrator |
2026-04-13 00:02:23.461517 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-04-13 00:02:23.461522 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-04-13 00:02:23.461526 | orchestrator |       + content = (sensitive value)
2026-04-13 00:02:23.461530 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-13 00:02:23.461534 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-13 00:02:23.461538 | orchestrator |       + content_md5 = (known after apply)
2026-04-13 00:02:23.461542 | orchestrator |       + content_sha1 = (known after apply)
2026-04-13 00:02:23.461545 | orchestrator |       + content_sha256 = (known after apply)
2026-04-13 00:02:23.461549 | orchestrator |       + content_sha512 = (known after apply)
2026-04-13 00:02:23.461553 | orchestrator |       + directory_permission = "0700"
2026-04-13 00:02:23.461557 | orchestrator |       + file_permission = "0600"
2026-04-13 00:02:23.461560 | orchestrator |       + filename = ".id_rsa.ci"
2026-04-13 00:02:23.461564 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.461568 | orchestrator |     }
2026-04-13 00:02:23.461588 | orchestrator |
2026-04-13 00:02:23.461599 | orchestrator |   # null_resource.node_semaphore will be created
2026-04-13 00:02:23.461607 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-04-13 00:02:23.461611 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.461615 | orchestrator |     }
2026-04-13 00:02:23.461681 | orchestrator |
2026-04-13 00:02:23.461693 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-13 00:02:23.461697 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-13 00:02:23.461701 | orchestrator |       + attachment = (known after apply)
2026-04-13 00:02:23.461705 | orchestrator |       + availability_zone = "nova"
2026-04-13 00:02:23.461709 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.461713 | orchestrator |       + image_id = (known after apply)
2026-04-13 00:02:23.461716 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.461720 | orchestrator |       + name = "testbed-volume-manager-base"
2026-04-13 00:02:23.461724 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.461728 | orchestrator |       + size = 80
2026-04-13 00:02:23.461731 | orchestrator |       + volume_retype_policy = "never"
2026-04-13 00:02:23.461735 | orchestrator |       + volume_type = "ssd"
2026-04-13 00:02:23.461739 | orchestrator |     }
2026-04-13 00:02:23.461804 | orchestrator |
2026-04-13 00:02:23.461816 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-13 00:02:23.461821 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-13 00:02:23.461825 | orchestrator |       + attachment = (known after apply)
2026-04-13 00:02:23.461828 | orchestrator |       + availability_zone = "nova"
2026-04-13 00:02:23.461832 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.461841 | orchestrator |       + image_id = (known after apply)
2026-04-13 00:02:23.461845 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.461849 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-04-13 00:02:23.461852 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.461856 | orchestrator |       + size = 80
2026-04-13 00:02:23.461860 | orchestrator |       + volume_retype_policy = "never"
2026-04-13 00:02:23.461864 | orchestrator |       + volume_type = "ssd"
2026-04-13 00:02:23.461868 | orchestrator |     }
2026-04-13 00:02:23.461929 | orchestrator |
2026-04-13 00:02:23.461941 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-13 00:02:23.461945 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-13 00:02:23.461949 | orchestrator |       + attachment = (known after apply)
2026-04-13 00:02:23.461953 | orchestrator |       + availability_zone = "nova"
2026-04-13 00:02:23.461957 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.461961 | orchestrator |       + image_id = (known after apply)
2026-04-13 00:02:23.461964 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.461968 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-04-13 00:02:23.461972 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.461976 | orchestrator |       + size = 80
2026-04-13 00:02:23.461979 | orchestrator |       + volume_retype_policy = "never"
2026-04-13 00:02:23.461983 | orchestrator |       + volume_type = "ssd"
2026-04-13 00:02:23.461987 | orchestrator |     }
2026-04-13 00:02:23.462104 | orchestrator |
2026-04-13 00:02:23.462124 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-13 00:02:23.462131 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-13 00:02:23.462138 | orchestrator |       + attachment = (known after apply)
2026-04-13 00:02:23.462144 | orchestrator |       + availability_zone = "nova"
2026-04-13 00:02:23.462150 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.462156 | orchestrator |       + image_id = (known after apply)
2026-04-13 00:02:23.462162 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.462168 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-04-13 00:02:23.462174 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.462180 | orchestrator |       + size = 80
2026-04-13 00:02:23.462187 | orchestrator |       + volume_retype_policy = "never"
2026-04-13 00:02:23.462193 | orchestrator |       + volume_type = "ssd"
2026-04-13 00:02:23.462200 | orchestrator |     }
2026-04-13 00:02:23.462276 | orchestrator |
2026-04-13 00:02:23.462288 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-13 00:02:23.462293 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-13 00:02:23.462296 | orchestrator |       + attachment = (known after apply)
2026-04-13 00:02:23.462300 | orchestrator |       + availability_zone = "nova"
2026-04-13 00:02:23.462304 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.462308 | orchestrator |       + image_id = (known after apply)
2026-04-13 00:02:23.462312 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.462321 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-04-13 00:02:23.462325 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.462329 | orchestrator |       + size = 80
2026-04-13 00:02:23.462332 | orchestrator |       + volume_retype_policy = "never"
2026-04-13 00:02:23.462336 | orchestrator |       + volume_type = "ssd"
2026-04-13 00:02:23.462340 | orchestrator |     }
2026-04-13 00:02:23.462403 | orchestrator |
2026-04-13 00:02:23.462415 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-13 00:02:23.462419 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-13 00:02:23.462423 | orchestrator |       + attachment = (known after apply)
2026-04-13 00:02:23.462427 | orchestrator |       + availability_zone = "nova"
2026-04-13 00:02:23.462431 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.462440 | orchestrator |       + image_id = (known after apply)
2026-04-13 00:02:23.462444 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.462447 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-04-13 00:02:23.462465 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.462469 | orchestrator |       + size = 80
2026-04-13 00:02:23.462472 | orchestrator |       + volume_retype_policy = "never"
2026-04-13 00:02:23.462476 | orchestrator |       + volume_type = "ssd"
2026-04-13 00:02:23.462480 | orchestrator |     }
2026-04-13 00:02:23.462547 | orchestrator |
2026-04-13 00:02:23.462558 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-13 00:02:23.462562 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-13 00:02:23.462566 | orchestrator |       + attachment = (known after apply)
2026-04-13 00:02:23.462570 | orchestrator |       + availability_zone = "nova"
2026-04-13 00:02:23.462574 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.462577 | orchestrator |       + image_id = (known after apply)
2026-04-13 00:02:23.462581 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.462585 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-04-13 00:02:23.462589 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.462593 | orchestrator |       + size = 80
2026-04-13 00:02:23.462597 | orchestrator |       + volume_retype_policy = "never"
2026-04-13 00:02:23.462600 | orchestrator |       + volume_type = "ssd"
2026-04-13 00:02:23.462604 | orchestrator |     }
2026-04-13 00:02:23.462666 | orchestrator |
2026-04-13 00:02:23.462677 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-13 00:02:23.462683 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.462687 | orchestrator |       + attachment = (known after apply)
2026-04-13 00:02:23.462690 | orchestrator |       + availability_zone = "nova"
2026-04-13 00:02:23.462694 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.462698 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.462702 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-04-13 00:02:23.462706 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.462710 | orchestrator |       + size = 20
2026-04-13 00:02:23.462714 | orchestrator |       + volume_retype_policy = "never"
2026-04-13 00:02:23.462718 | orchestrator |       + volume_type = "ssd"
2026-04-13 00:02:23.462721 | orchestrator |     }
2026-04-13 00:02:23.462789 | orchestrator |
2026-04-13 00:02:23.462805 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-13 00:02:23.462812 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.462818 | orchestrator |       + attachment = (known after apply)
2026-04-13 00:02:23.462824 | orchestrator |       + availability_zone = "nova"
2026-04-13 00:02:23.462828 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.462832 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.462835 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-04-13 00:02:23.462839 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.462843 | orchestrator |       + size = 20
2026-04-13 00:02:23.462846 | orchestrator |       + volume_retype_policy = "never"
2026-04-13 00:02:23.462850 | orchestrator |       + volume_type = "ssd"
2026-04-13 00:02:23.462854 | orchestrator |     }
2026-04-13 00:02:23.462919 | orchestrator |
2026-04-13 00:02:23.462931 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-13 00:02:23.462935 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.462939 | orchestrator |       + attachment = (known after apply)
2026-04-13 00:02:23.462943 | orchestrator |       + availability_zone = "nova"
2026-04-13 00:02:23.462946 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.462950 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.462954 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-04-13 00:02:23.462958 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.462966 | orchestrator |       + size = 20
2026-04-13 00:02:23.462970 | orchestrator |       + volume_retype_policy = "never"
2026-04-13 00:02:23.462974 | orchestrator |       + volume_type = "ssd"
2026-04-13 00:02:23.462978 | orchestrator |     }
2026-04-13 00:02:23.463036 | orchestrator |
2026-04-13 00:02:23.463047 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-13 00:02:23.463051 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.463055 | orchestrator |       + attachment = (known after apply)
2026-04-13 00:02:23.463059 | orchestrator |       + availability_zone = "nova"
2026-04-13 00:02:23.463062 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.463066 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.463070 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-04-13 00:02:23.463074 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.463078 | orchestrator |       + size = 20
2026-04-13 00:02:23.463081 | orchestrator |       + volume_retype_policy = "never"
2026-04-13 00:02:23.463085 | orchestrator |       + volume_type = "ssd"
2026-04-13 00:02:23.463089 | orchestrator |     }
2026-04-13 00:02:23.463168 | orchestrator |
2026-04-13 00:02:23.463180 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-13 00:02:23.463184 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.463188 | orchestrator |       + attachment = (known after apply)
2026-04-13 00:02:23.463192 | orchestrator |       + availability_zone = "nova"
2026-04-13 00:02:23.463196 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.463200 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.463203 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-04-13 00:02:23.463207 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.463214 | orchestrator |       + size = 20
2026-04-13 00:02:23.463218 | orchestrator |       + volume_retype_policy = "never"
2026-04-13 00:02:23.463222 | orchestrator |       + volume_type = "ssd"
2026-04-13 00:02:23.463226 | orchestrator |     }
2026-04-13 00:02:23.463290 | orchestrator |
2026-04-13 00:02:23.463301 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-13 00:02:23.463306 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.463310 | orchestrator |       + attachment = (known after apply)
2026-04-13 00:02:23.463313 | orchestrator |       + availability_zone = "nova"
2026-04-13 00:02:23.463317 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.463321 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.463325 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-04-13 00:02:23.463328 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.463332 | orchestrator |       + size = 20
2026-04-13 00:02:23.463336 | orchestrator |       + volume_retype_policy = "never"
2026-04-13 00:02:23.463340 | orchestrator |       + volume_type = "ssd"
2026-04-13 00:02:23.463344 | orchestrator |     }
2026-04-13 00:02:23.463417 | orchestrator |
2026-04-13 00:02:23.463430 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-13 00:02:23.463434 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.463438 | orchestrator |       + attachment = (known after apply)
2026-04-13 00:02:23.463442 | orchestrator |       + availability_zone = "nova"
2026-04-13 00:02:23.463445 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.463466 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.463472 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-04-13 00:02:23.463480 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.463484 | orchestrator |       + size = 20
2026-04-13 00:02:23.463488 | orchestrator |       + volume_retype_policy = "never"
2026-04-13 00:02:23.463492 | orchestrator |       + volume_type = "ssd"
2026-04-13 00:02:23.463496 | orchestrator |     }
2026-04-13 00:02:23.463560 | orchestrator |
2026-04-13 00:02:23.463571 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-13 00:02:23.463576 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.463587 | orchestrator |       + attachment = (known after apply)
2026-04-13 00:02:23.463591 | orchestrator |       + availability_zone = "nova"
2026-04-13 00:02:23.463595 | orchestrator |       + id = (known after apply)
2026-04-13 00:02:23.463599 | orchestrator |       + metadata = (known after apply)
2026-04-13 00:02:23.463602 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-04-13 00:02:23.463606 | orchestrator |       + region = (known after apply)
2026-04-13 00:02:23.463610 | orchestrator |       + size = 20
2026-04-13 00:02:23.463614 | orchestrator |       + volume_retype_policy = "never"
2026-04-13 00:02:23.463618 | orchestrator |       + volume_type = "ssd"
2026-04-13 00:02:23.463622 | orchestrator |     }
2026-04-13 00:02:23.463683 | orchestrator |
2026-04-13 00:02:23.463695 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-13 00:02:23.463699 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-13 00:02:23.463703 | orchestrator | + attachment = (known after apply) 2026-04-13 00:02:23.463707 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.463711 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.463714 | orchestrator | + metadata = (known after apply) 2026-04-13 00:02:23.463718 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-13 00:02:23.463722 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.463726 | orchestrator | + size = 20 2026-04-13 00:02:23.463730 | orchestrator | + volume_retype_policy = "never" 2026-04-13 00:02:23.463733 | orchestrator | + volume_type = "ssd" 2026-04-13 00:02:23.463737 | orchestrator | } 2026-04-13 00:02:23.463949 | orchestrator | 2026-04-13 00:02:23.463964 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-13 00:02:23.463969 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-13 00:02:23.463973 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-13 00:02:23.463977 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-13 00:02:23.463980 | orchestrator | + all_metadata = (known after apply) 2026-04-13 00:02:23.463984 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.463988 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.463992 | orchestrator | + config_drive = true 2026-04-13 00:02:23.463996 | orchestrator | + created = (known after apply) 2026-04-13 00:02:23.463999 | orchestrator | + flavor_id = (known after apply) 2026-04-13 00:02:23.464003 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-13 00:02:23.464007 | orchestrator | + force_delete = false 2026-04-13 00:02:23.464011 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-13 00:02:23.464015 | 
orchestrator | + id = (known after apply) 2026-04-13 00:02:23.464018 | orchestrator | + image_id = (known after apply) 2026-04-13 00:02:23.464022 | orchestrator | + image_name = (known after apply) 2026-04-13 00:02:23.464026 | orchestrator | + key_pair = "testbed" 2026-04-13 00:02:23.464030 | orchestrator | + name = "testbed-manager" 2026-04-13 00:02:23.464034 | orchestrator | + power_state = "active" 2026-04-13 00:02:23.464038 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.464041 | orchestrator | + security_groups = (known after apply) 2026-04-13 00:02:23.464045 | orchestrator | + stop_before_destroy = false 2026-04-13 00:02:23.464049 | orchestrator | + updated = (known after apply) 2026-04-13 00:02:23.464053 | orchestrator | + user_data = (sensitive value) 2026-04-13 00:02:23.464057 | orchestrator | 2026-04-13 00:02:23.464061 | orchestrator | + block_device { 2026-04-13 00:02:23.464065 | orchestrator | + boot_index = 0 2026-04-13 00:02:23.464069 | orchestrator | + delete_on_termination = false 2026-04-13 00:02:23.464075 | orchestrator | + destination_type = "volume" 2026-04-13 00:02:23.464079 | orchestrator | + multiattach = false 2026-04-13 00:02:23.464083 | orchestrator | + source_type = "volume" 2026-04-13 00:02:23.464087 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.464095 | orchestrator | } 2026-04-13 00:02:23.464099 | orchestrator | 2026-04-13 00:02:23.464103 | orchestrator | + network { 2026-04-13 00:02:23.464107 | orchestrator | + access_network = false 2026-04-13 00:02:23.464110 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-13 00:02:23.464114 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-13 00:02:23.464118 | orchestrator | + mac = (known after apply) 2026-04-13 00:02:23.464122 | orchestrator | + name = (known after apply) 2026-04-13 00:02:23.464126 | orchestrator | + port = (known after apply) 2026-04-13 00:02:23.464129 | orchestrator | + uuid = (known after apply) 2026-04-13 
00:02:23.464133 | orchestrator | } 2026-04-13 00:02:23.464137 | orchestrator | } 2026-04-13 00:02:23.464323 | orchestrator | 2026-04-13 00:02:23.464335 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-13 00:02:23.464339 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-13 00:02:23.464343 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-13 00:02:23.464347 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-13 00:02:23.464351 | orchestrator | + all_metadata = (known after apply) 2026-04-13 00:02:23.464355 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.464358 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.464362 | orchestrator | + config_drive = true 2026-04-13 00:02:23.464366 | orchestrator | + created = (known after apply) 2026-04-13 00:02:23.464370 | orchestrator | + flavor_id = (known after apply) 2026-04-13 00:02:23.464374 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-13 00:02:23.464377 | orchestrator | + force_delete = false 2026-04-13 00:02:23.464381 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-13 00:02:23.464385 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.464389 | orchestrator | + image_id = (known after apply) 2026-04-13 00:02:23.464393 | orchestrator | + image_name = (known after apply) 2026-04-13 00:02:23.464396 | orchestrator | + key_pair = "testbed" 2026-04-13 00:02:23.464400 | orchestrator | + name = "testbed-node-0" 2026-04-13 00:02:23.464404 | orchestrator | + power_state = "active" 2026-04-13 00:02:23.464408 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.464411 | orchestrator | + security_groups = (known after apply) 2026-04-13 00:02:23.464415 | orchestrator | + stop_before_destroy = false 2026-04-13 00:02:23.464419 | orchestrator | + updated = (known after apply) 2026-04-13 00:02:23.464423 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-13 00:02:23.464427 | orchestrator | 2026-04-13 00:02:23.464430 | orchestrator | + block_device { 2026-04-13 00:02:23.464434 | orchestrator | + boot_index = 0 2026-04-13 00:02:23.464438 | orchestrator | + delete_on_termination = false 2026-04-13 00:02:23.464442 | orchestrator | + destination_type = "volume" 2026-04-13 00:02:23.464446 | orchestrator | + multiattach = false 2026-04-13 00:02:23.464487 | orchestrator | + source_type = "volume" 2026-04-13 00:02:23.464492 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.464496 | orchestrator | } 2026-04-13 00:02:23.464500 | orchestrator | 2026-04-13 00:02:23.464504 | orchestrator | + network { 2026-04-13 00:02:23.464508 | orchestrator | + access_network = false 2026-04-13 00:02:23.464512 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-13 00:02:23.464516 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-13 00:02:23.464520 | orchestrator | + mac = (known after apply) 2026-04-13 00:02:23.464524 | orchestrator | + name = (known after apply) 2026-04-13 00:02:23.464528 | orchestrator | + port = (known after apply) 2026-04-13 00:02:23.464532 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.464536 | orchestrator | } 2026-04-13 00:02:23.464540 | orchestrator | } 2026-04-13 00:02:23.464728 | orchestrator | 2026-04-13 00:02:23.464741 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-13 00:02:23.464745 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-13 00:02:23.464749 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-13 00:02:23.464758 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-13 00:02:23.464762 | orchestrator | + all_metadata = (known after apply) 2026-04-13 00:02:23.464766 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.464769 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.464773 
| orchestrator | + config_drive = true 2026-04-13 00:02:23.464777 | orchestrator | + created = (known after apply) 2026-04-13 00:02:23.464781 | orchestrator | + flavor_id = (known after apply) 2026-04-13 00:02:23.464785 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-13 00:02:23.464788 | orchestrator | + force_delete = false 2026-04-13 00:02:23.464792 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-13 00:02:23.464796 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.464800 | orchestrator | + image_id = (known after apply) 2026-04-13 00:02:23.464803 | orchestrator | + image_name = (known after apply) 2026-04-13 00:02:23.464807 | orchestrator | + key_pair = "testbed" 2026-04-13 00:02:23.464811 | orchestrator | + name = "testbed-node-1" 2026-04-13 00:02:23.464815 | orchestrator | + power_state = "active" 2026-04-13 00:02:23.464819 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.464822 | orchestrator | + security_groups = (known after apply) 2026-04-13 00:02:23.464826 | orchestrator | + stop_before_destroy = false 2026-04-13 00:02:23.464830 | orchestrator | + updated = (known after apply) 2026-04-13 00:02:23.464834 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-13 00:02:23.464837 | orchestrator | 2026-04-13 00:02:23.464841 | orchestrator | + block_device { 2026-04-13 00:02:23.464845 | orchestrator | + boot_index = 0 2026-04-13 00:02:23.464849 | orchestrator | + delete_on_termination = false 2026-04-13 00:02:23.464853 | orchestrator | + destination_type = "volume" 2026-04-13 00:02:23.464857 | orchestrator | + multiattach = false 2026-04-13 00:02:23.464860 | orchestrator | + source_type = "volume" 2026-04-13 00:02:23.464864 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.464868 | orchestrator | } 2026-04-13 00:02:23.464872 | orchestrator | 2026-04-13 00:02:23.464875 | orchestrator | + network { 2026-04-13 00:02:23.464879 | orchestrator | + access_network = 
false 2026-04-13 00:02:23.464883 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-13 00:02:23.464887 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-13 00:02:23.464891 | orchestrator | + mac = (known after apply) 2026-04-13 00:02:23.464894 | orchestrator | + name = (known after apply) 2026-04-13 00:02:23.464898 | orchestrator | + port = (known after apply) 2026-04-13 00:02:23.464902 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.464906 | orchestrator | } 2026-04-13 00:02:23.464909 | orchestrator | } 2026-04-13 00:02:23.465096 | orchestrator | 2026-04-13 00:02:23.465107 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-13 00:02:23.465112 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-13 00:02:23.465116 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-13 00:02:23.465119 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-13 00:02:23.465124 | orchestrator | + all_metadata = (known after apply) 2026-04-13 00:02:23.465128 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.465135 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.465139 | orchestrator | + config_drive = true 2026-04-13 00:02:23.465143 | orchestrator | + created = (known after apply) 2026-04-13 00:02:23.465147 | orchestrator | + flavor_id = (known after apply) 2026-04-13 00:02:23.465151 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-13 00:02:23.465155 | orchestrator | + force_delete = false 2026-04-13 00:02:23.465159 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-13 00:02:23.465162 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.465166 | orchestrator | + image_id = (known after apply) 2026-04-13 00:02:23.465174 | orchestrator | + image_name = (known after apply) 2026-04-13 00:02:23.465178 | orchestrator | + key_pair = "testbed" 2026-04-13 00:02:23.465182 | orchestrator | + name = 
"testbed-node-2" 2026-04-13 00:02:23.465185 | orchestrator | + power_state = "active" 2026-04-13 00:02:23.465189 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.465193 | orchestrator | + security_groups = (known after apply) 2026-04-13 00:02:23.465197 | orchestrator | + stop_before_destroy = false 2026-04-13 00:02:23.465200 | orchestrator | + updated = (known after apply) 2026-04-13 00:02:23.465204 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-13 00:02:23.465208 | orchestrator | 2026-04-13 00:02:23.465212 | orchestrator | + block_device { 2026-04-13 00:02:23.465216 | orchestrator | + boot_index = 0 2026-04-13 00:02:23.465219 | orchestrator | + delete_on_termination = false 2026-04-13 00:02:23.465223 | orchestrator | + destination_type = "volume" 2026-04-13 00:02:23.465227 | orchestrator | + multiattach = false 2026-04-13 00:02:23.465231 | orchestrator | + source_type = "volume" 2026-04-13 00:02:23.465234 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.465238 | orchestrator | } 2026-04-13 00:02:23.465242 | orchestrator | 2026-04-13 00:02:23.465246 | orchestrator | + network { 2026-04-13 00:02:23.465250 | orchestrator | + access_network = false 2026-04-13 00:02:23.465254 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-13 00:02:23.465257 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-13 00:02:23.465261 | orchestrator | + mac = (known after apply) 2026-04-13 00:02:23.465265 | orchestrator | + name = (known after apply) 2026-04-13 00:02:23.465269 | orchestrator | + port = (known after apply) 2026-04-13 00:02:23.465272 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.465276 | orchestrator | } 2026-04-13 00:02:23.465280 | orchestrator | } 2026-04-13 00:02:23.465469 | orchestrator | 2026-04-13 00:02:23.465482 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-13 00:02:23.465487 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-13 00:02:23.465490 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-13 00:02:23.465494 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-13 00:02:23.465498 | orchestrator | + all_metadata = (known after apply) 2026-04-13 00:02:23.465502 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.465506 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.465509 | orchestrator | + config_drive = true 2026-04-13 00:02:23.465513 | orchestrator | + created = (known after apply) 2026-04-13 00:02:23.465517 | orchestrator | + flavor_id = (known after apply) 2026-04-13 00:02:23.465521 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-13 00:02:23.465524 | orchestrator | + force_delete = false 2026-04-13 00:02:23.465528 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-13 00:02:23.465532 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.465536 | orchestrator | + image_id = (known after apply) 2026-04-13 00:02:23.465540 | orchestrator | + image_name = (known after apply) 2026-04-13 00:02:23.465543 | orchestrator | + key_pair = "testbed" 2026-04-13 00:02:23.465547 | orchestrator | + name = "testbed-node-3" 2026-04-13 00:02:23.465551 | orchestrator | + power_state = "active" 2026-04-13 00:02:23.465555 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.465559 | orchestrator | + security_groups = (known after apply) 2026-04-13 00:02:23.465562 | orchestrator | + stop_before_destroy = false 2026-04-13 00:02:23.465566 | orchestrator | + updated = (known after apply) 2026-04-13 00:02:23.465570 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-13 00:02:23.465574 | orchestrator | 2026-04-13 00:02:23.465577 | orchestrator | + block_device { 2026-04-13 00:02:23.465584 | orchestrator | + boot_index = 0 2026-04-13 00:02:23.465588 | orchestrator | + delete_on_termination = false 2026-04-13 
00:02:23.465592 | orchestrator | + destination_type = "volume" 2026-04-13 00:02:23.465599 | orchestrator | + multiattach = false 2026-04-13 00:02:23.465603 | orchestrator | + source_type = "volume" 2026-04-13 00:02:23.465607 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.465611 | orchestrator | } 2026-04-13 00:02:23.465614 | orchestrator | 2026-04-13 00:02:23.465618 | orchestrator | + network { 2026-04-13 00:02:23.465622 | orchestrator | + access_network = false 2026-04-13 00:02:23.465626 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-13 00:02:23.465630 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-13 00:02:23.465634 | orchestrator | + mac = (known after apply) 2026-04-13 00:02:23.465637 | orchestrator | + name = (known after apply) 2026-04-13 00:02:23.465641 | orchestrator | + port = (known after apply) 2026-04-13 00:02:23.465645 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.465649 | orchestrator | } 2026-04-13 00:02:23.465653 | orchestrator | } 2026-04-13 00:02:23.465829 | orchestrator | 2026-04-13 00:02:23.465840 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-13 00:02:23.465845 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-13 00:02:23.465849 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-13 00:02:23.465852 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-13 00:02:23.465856 | orchestrator | + all_metadata = (known after apply) 2026-04-13 00:02:23.465860 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.465864 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.465868 | orchestrator | + config_drive = true 2026-04-13 00:02:23.465871 | orchestrator | + created = (known after apply) 2026-04-13 00:02:23.465875 | orchestrator | + flavor_id = (known after apply) 2026-04-13 00:02:23.465879 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-13 00:02:23.465883 | 
orchestrator | + force_delete = false 2026-04-13 00:02:23.465886 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-13 00:02:23.465890 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.465894 | orchestrator | + image_id = (known after apply) 2026-04-13 00:02:23.465898 | orchestrator | + image_name = (known after apply) 2026-04-13 00:02:23.465901 | orchestrator | + key_pair = "testbed" 2026-04-13 00:02:23.465905 | orchestrator | + name = "testbed-node-4" 2026-04-13 00:02:23.465909 | orchestrator | + power_state = "active" 2026-04-13 00:02:23.465913 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.465916 | orchestrator | + security_groups = (known after apply) 2026-04-13 00:02:23.465920 | orchestrator | + stop_before_destroy = false 2026-04-13 00:02:23.465924 | orchestrator | + updated = (known after apply) 2026-04-13 00:02:23.465928 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-13 00:02:23.465932 | orchestrator | 2026-04-13 00:02:23.465935 | orchestrator | + block_device { 2026-04-13 00:02:23.465939 | orchestrator | + boot_index = 0 2026-04-13 00:02:23.465943 | orchestrator | + delete_on_termination = false 2026-04-13 00:02:23.465947 | orchestrator | + destination_type = "volume" 2026-04-13 00:02:23.465950 | orchestrator | + multiattach = false 2026-04-13 00:02:23.465954 | orchestrator | + source_type = "volume" 2026-04-13 00:02:23.465958 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.465962 | orchestrator | } 2026-04-13 00:02:23.465966 | orchestrator | 2026-04-13 00:02:23.465969 | orchestrator | + network { 2026-04-13 00:02:23.465973 | orchestrator | + access_network = false 2026-04-13 00:02:23.465977 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-13 00:02:23.465981 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-13 00:02:23.465984 | orchestrator | + mac = (known after apply) 2026-04-13 00:02:23.465988 | orchestrator | + name = (known 
after apply) 2026-04-13 00:02:23.465992 | orchestrator | + port = (known after apply) 2026-04-13 00:02:23.465996 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.466000 | orchestrator | } 2026-04-13 00:02:23.466003 | orchestrator | } 2026-04-13 00:02:23.466243 | orchestrator | 2026-04-13 00:02:23.466257 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-13 00:02:23.466262 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-13 00:02:23.466265 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-13 00:02:23.466269 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-13 00:02:23.466273 | orchestrator | + all_metadata = (known after apply) 2026-04-13 00:02:23.466277 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.466281 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.466284 | orchestrator | + config_drive = true 2026-04-13 00:02:23.466288 | orchestrator | + created = (known after apply) 2026-04-13 00:02:23.466292 | orchestrator | + flavor_id = (known after apply) 2026-04-13 00:02:23.466296 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-13 00:02:23.466300 | orchestrator | + force_delete = false 2026-04-13 00:02:23.466306 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-13 00:02:23.466310 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.466314 | orchestrator | + image_id = (known after apply) 2026-04-13 00:02:23.466318 | orchestrator | + image_name = (known after apply) 2026-04-13 00:02:23.466321 | orchestrator | + key_pair = "testbed" 2026-04-13 00:02:23.466325 | orchestrator | + name = "testbed-node-5" 2026-04-13 00:02:23.466329 | orchestrator | + power_state = "active" 2026-04-13 00:02:23.466333 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.466336 | orchestrator | + security_groups = (known after apply) 2026-04-13 00:02:23.466340 | orchestrator | + 
stop_before_destroy = false 2026-04-13 00:02:23.466344 | orchestrator | + updated = (known after apply) 2026-04-13 00:02:23.466348 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-13 00:02:23.466352 | orchestrator | 2026-04-13 00:02:23.466356 | orchestrator | + block_device { 2026-04-13 00:02:23.466359 | orchestrator | + boot_index = 0 2026-04-13 00:02:23.466363 | orchestrator | + delete_on_termination = false 2026-04-13 00:02:23.466367 | orchestrator | + destination_type = "volume" 2026-04-13 00:02:23.466371 | orchestrator | + multiattach = false 2026-04-13 00:02:23.466374 | orchestrator | + source_type = "volume" 2026-04-13 00:02:23.466378 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.466382 | orchestrator | } 2026-04-13 00:02:23.466386 | orchestrator | 2026-04-13 00:02:23.466389 | orchestrator | + network { 2026-04-13 00:02:23.466393 | orchestrator | + access_network = false 2026-04-13 00:02:23.466397 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-13 00:02:23.466401 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-13 00:02:23.466404 | orchestrator | + mac = (known after apply) 2026-04-13 00:02:23.466408 | orchestrator | + name = (known after apply) 2026-04-13 00:02:23.466412 | orchestrator | + port = (known after apply) 2026-04-13 00:02:23.466416 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.466420 | orchestrator | } 2026-04-13 00:02:23.466423 | orchestrator | } 2026-04-13 00:02:23.466481 | orchestrator | 2026-04-13 00:02:23.466493 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-13 00:02:23.466497 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-13 00:02:23.466501 | orchestrator | + fingerprint = (known after apply) 2026-04-13 00:02:23.466505 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.466509 | orchestrator | + name = "testbed" 2026-04-13 00:02:23.466513 | orchestrator | + private_key = 
(sensitive value) 2026-04-13 00:02:23.466516 | orchestrator | + public_key = (known after apply) 2026-04-13 00:02:23.466520 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.466524 | orchestrator | + user_id = (known after apply) 2026-04-13 00:02:23.466528 | orchestrator | } 2026-04-13 00:02:23.466567 | orchestrator | 2026-04-13 00:02:23.466578 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-13 00:02:23.466582 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-13 00:02:23.466591 | orchestrator | + device = (known after apply) 2026-04-13 00:02:23.466595 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.466599 | orchestrator | + instance_id = (known after apply) 2026-04-13 00:02:23.466602 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.466606 | orchestrator | + volume_id = (known after apply) 2026-04-13 00:02:23.466610 | orchestrator | } 2026-04-13 00:02:23.466649 | orchestrator | 2026-04-13 00:02:23.466660 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-13 00:02:23.466664 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-13 00:02:23.466668 | orchestrator | + device = (known after apply) 2026-04-13 00:02:23.466672 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.466675 | orchestrator | + instance_id = (known after apply) 2026-04-13 00:02:23.466679 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.466683 | orchestrator | + volume_id = (known after apply) 2026-04-13 00:02:23.466687 | orchestrator | } 2026-04-13 00:02:23.466724 | orchestrator | 2026-04-13 00:02:23.466735 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-13 00:02:23.466739 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-04-13 00:02:23.466743 | orchestrator | + device = (known after apply)
2026-04-13 00:02:23.466747 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.466751 | orchestrator | + instance_id = (known after apply)
2026-04-13 00:02:23.466754 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.466758 | orchestrator | + volume_id = (known after apply)
2026-04-13 00:02:23.466762 | orchestrator | }
2026-04-13 00:02:23.466799 | orchestrator |
2026-04-13 00:02:23.466810 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-04-13 00:02:23.466814 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-13 00:02:23.466818 | orchestrator | + device = (known after apply)
2026-04-13 00:02:23.466822 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.466826 | orchestrator | + instance_id = (known after apply)
2026-04-13 00:02:23.466830 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.466834 | orchestrator | + volume_id = (known after apply)
2026-04-13 00:02:23.466837 | orchestrator | }
2026-04-13 00:02:23.466872 | orchestrator |
2026-04-13 00:02:23.466883 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-04-13 00:02:23.466887 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-13 00:02:23.466891 | orchestrator | + device = (known after apply)
2026-04-13 00:02:23.466895 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.466899 | orchestrator | + instance_id = (known after apply)
2026-04-13 00:02:23.466905 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.466909 | orchestrator | + volume_id = (known after apply)
2026-04-13 00:02:23.466913 | orchestrator | }
2026-04-13 00:02:23.466946 | orchestrator |
2026-04-13 00:02:23.466957 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-04-13 00:02:23.466961 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-13 00:02:23.466965 | orchestrator | + device = (known after apply)
2026-04-13 00:02:23.466969 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.466973 | orchestrator | + instance_id = (known after apply)
2026-04-13 00:02:23.466977 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.466980 | orchestrator | + volume_id = (known after apply)
2026-04-13 00:02:23.466984 | orchestrator | }
2026-04-13 00:02:23.467022 | orchestrator |
2026-04-13 00:02:23.467034 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-04-13 00:02:23.467038 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-13 00:02:23.467042 | orchestrator | + device = (known after apply)
2026-04-13 00:02:23.467046 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.467049 | orchestrator | + instance_id = (known after apply)
2026-04-13 00:02:23.467053 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.467063 | orchestrator | + volume_id = (known after apply)
2026-04-13 00:02:23.467067 | orchestrator | }
2026-04-13 00:02:23.467103 | orchestrator |
2026-04-13 00:02:23.467114 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-04-13 00:02:23.467118 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-13 00:02:23.467122 | orchestrator | + device = (known after apply)
2026-04-13 00:02:23.467126 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.467130 | orchestrator | + instance_id = (known after apply)
2026-04-13 00:02:23.467133 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.467137 | orchestrator | + volume_id = (known after apply)
2026-04-13 00:02:23.467141 | orchestrator | }
2026-04-13 00:02:23.467190 | orchestrator |
2026-04-13 00:02:23.467206 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-04-13 00:02:23.467212 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-13 00:02:23.467218 | orchestrator | + device = (known after apply)
2026-04-13 00:02:23.467224 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.467230 | orchestrator | + instance_id = (known after apply)
2026-04-13 00:02:23.467235 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.467241 | orchestrator | + volume_id = (known after apply)
2026-04-13 00:02:23.467247 | orchestrator | }
2026-04-13 00:02:23.467305 | orchestrator |
2026-04-13 00:02:23.467323 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-04-13 00:02:23.467331 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-04-13 00:02:23.467337 | orchestrator | + fixed_ip = (known after apply)
2026-04-13 00:02:23.467343 | orchestrator | + floating_ip = (known after apply)
2026-04-13 00:02:23.467349 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.467355 | orchestrator | + port_id = (known after apply)
2026-04-13 00:02:23.467360 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.467367 | orchestrator | }
2026-04-13 00:02:23.467443 | orchestrator |
2026-04-13 00:02:23.467487 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-04-13 00:02:23.467493 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-04-13 00:02:23.467497 | orchestrator | + address = (known after apply)
2026-04-13 00:02:23.467501 | orchestrator | + all_tags = (known after apply)
2026-04-13 00:02:23.467505 | orchestrator | + dns_domain = (known after apply)
2026-04-13 00:02:23.467509 | orchestrator | + dns_name = (known after apply)
2026-04-13 00:02:23.467512 | orchestrator | + fixed_ip = (known after apply)
2026-04-13 00:02:23.467516 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.467520 | orchestrator | + pool = "public"
2026-04-13 00:02:23.467524 | orchestrator | + port_id = (known after apply)
2026-04-13 00:02:23.467528 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.467532 | orchestrator | + subnet_id = (known after apply)
2026-04-13 00:02:23.467536 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.467539 | orchestrator | }
2026-04-13 00:02:23.467639 | orchestrator |
2026-04-13 00:02:23.467651 | orchestrator | # openstack_networking_network_v2.net_management will be created
2026-04-13 00:02:23.467655 | orchestrator | + resource "openstack_networking_network_v2" "net_management" {
2026-04-13 00:02:23.467659 | orchestrator | + admin_state_up = (known after apply)
2026-04-13 00:02:23.467663 | orchestrator | + all_tags = (known after apply)
2026-04-13 00:02:23.467667 | orchestrator | + availability_zone_hints = [
2026-04-13 00:02:23.467671 | orchestrator | + "nova",
2026-04-13 00:02:23.467675 | orchestrator | ]
2026-04-13 00:02:23.467679 | orchestrator | + dns_domain = (known after apply)
2026-04-13 00:02:23.467683 | orchestrator | + external = (known after apply)
2026-04-13 00:02:23.467687 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.467690 | orchestrator | + mtu = (known after apply)
2026-04-13 00:02:23.467694 | orchestrator | + name = "net-testbed-management"
2026-04-13 00:02:23.467698 | orchestrator | + port_security_enabled = (known after apply)
2026-04-13 00:02:23.467708 | orchestrator | + qos_policy_id = (known after apply)
2026-04-13 00:02:23.467712 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.467716 | orchestrator | + shared = (known after apply)
2026-04-13 00:02:23.467720 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.467724 | orchestrator | + transparent_vlan = (known after apply)
2026-04-13 00:02:23.467727 | orchestrator |
2026-04-13 00:02:23.467731 | orchestrator | + segments (known after apply)
2026-04-13 00:02:23.467735 | orchestrator | }
2026-04-13 00:02:23.467860 | orchestrator |
2026-04-13 00:02:23.467873 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created
2026-04-13 00:02:23.467877 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" {
2026-04-13 00:02:23.467881 | orchestrator | + admin_state_up = (known after apply)
2026-04-13 00:02:23.467885 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-13 00:02:23.467888 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-13 00:02:23.467896 | orchestrator | + all_tags = (known after apply)
2026-04-13 00:02:23.467900 | orchestrator | + device_id = (known after apply)
2026-04-13 00:02:23.467904 | orchestrator | + device_owner = (known after apply)
2026-04-13 00:02:23.467908 | orchestrator | + dns_assignment = (known after apply)
2026-04-13 00:02:23.467912 | orchestrator | + dns_name = (known after apply)
2026-04-13 00:02:23.467915 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.467919 | orchestrator | + mac_address = (known after apply)
2026-04-13 00:02:23.467923 | orchestrator | + network_id = (known after apply)
2026-04-13 00:02:23.467927 | orchestrator | + port_security_enabled = (known after apply)
2026-04-13 00:02:23.467931 | orchestrator | + qos_policy_id = (known after apply)
2026-04-13 00:02:23.467935 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.467938 | orchestrator | + security_group_ids = (known after apply)
2026-04-13 00:02:23.467942 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.467946 | orchestrator |
2026-04-13 00:02:23.467950 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.467953 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-13 00:02:23.467957 | orchestrator | }
2026-04-13 00:02:23.467961 | orchestrator |
2026-04-13 00:02:23.467965 | orchestrator | + binding (known after apply)
2026-04-13 00:02:23.467969 | orchestrator |
2026-04-13 00:02:23.467973 | orchestrator | + fixed_ip {
2026-04-13 00:02:23.467977 | orchestrator | + ip_address = "192.168.16.5"
2026-04-13 00:02:23.467981 | orchestrator | + subnet_id = (known after apply)
2026-04-13 00:02:23.467984 | orchestrator | }
2026-04-13 00:02:23.467988 | orchestrator | }
2026-04-13 00:02:23.468120 | orchestrator |
2026-04-13 00:02:23.468132 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created
2026-04-13 00:02:23.468136 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-13 00:02:23.468140 | orchestrator | + admin_state_up = (known after apply)
2026-04-13 00:02:23.468144 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-13 00:02:23.468148 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-13 00:02:23.468151 | orchestrator | + all_tags = (known after apply)
2026-04-13 00:02:23.468155 | orchestrator | + device_id = (known after apply)
2026-04-13 00:02:23.468159 | orchestrator | + device_owner = (known after apply)
2026-04-13 00:02:23.468163 | orchestrator | + dns_assignment = (known after apply)
2026-04-13 00:02:23.468167 | orchestrator | + dns_name = (known after apply)
2026-04-13 00:02:23.468170 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.468174 | orchestrator | + mac_address = (known after apply)
2026-04-13 00:02:23.468178 | orchestrator | + network_id = (known after apply)
2026-04-13 00:02:23.468182 | orchestrator | + port_security_enabled = (known after apply)
2026-04-13 00:02:23.468185 | orchestrator | + qos_policy_id = (known after apply)
2026-04-13 00:02:23.468189 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.468197 | orchestrator | + security_group_ids = (known after apply)
2026-04-13 00:02:23.468201 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.468204 | orchestrator |
2026-04-13 00:02:23.468208 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.468212 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-13 00:02:23.468216 | orchestrator | }
2026-04-13 00:02:23.468220 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.468223 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-13 00:02:23.468227 | orchestrator | }
2026-04-13 00:02:23.468231 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.468235 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-13 00:02:23.468238 | orchestrator | }
2026-04-13 00:02:23.468242 | orchestrator |
2026-04-13 00:02:23.468246 | orchestrator | + binding (known after apply)
2026-04-13 00:02:23.468250 | orchestrator |
2026-04-13 00:02:23.468254 | orchestrator | + fixed_ip {
2026-04-13 00:02:23.468258 | orchestrator | + ip_address = "192.168.16.10"
2026-04-13 00:02:23.468261 | orchestrator | + subnet_id = (known after apply)
2026-04-13 00:02:23.468265 | orchestrator | }
2026-04-13 00:02:23.468269 | orchestrator | }
2026-04-13 00:02:23.468493 | orchestrator |
2026-04-13 00:02:23.468517 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created
2026-04-13 00:02:23.468524 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-13 00:02:23.468530 | orchestrator | + admin_state_up = (known after apply)
2026-04-13 00:02:23.468536 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-13 00:02:23.468541 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-13 00:02:23.468547 | orchestrator | + all_tags = (known after apply)
2026-04-13 00:02:23.468554 | orchestrator | + device_id = (known after apply)
2026-04-13 00:02:23.468560 | orchestrator | + device_owner = (known after apply)
2026-04-13 00:02:23.468566 | orchestrator | + dns_assignment = (known after apply)
2026-04-13 00:02:23.468572 | orchestrator | + dns_name = (known after apply)
2026-04-13 00:02:23.468578 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.468584 | orchestrator | + mac_address = (known after apply)
2026-04-13 00:02:23.468590 | orchestrator | + network_id = (known after apply)
2026-04-13 00:02:23.468596 | orchestrator | + port_security_enabled = (known after apply)
2026-04-13 00:02:23.468600 | orchestrator | + qos_policy_id = (known after apply)
2026-04-13 00:02:23.468603 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.468607 | orchestrator | + security_group_ids = (known after apply)
2026-04-13 00:02:23.468611 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.468615 | orchestrator |
2026-04-13 00:02:23.468618 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.468622 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-13 00:02:23.468626 | orchestrator | }
2026-04-13 00:02:23.468630 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.468634 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-13 00:02:23.468638 | orchestrator | }
2026-04-13 00:02:23.468641 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.468645 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-13 00:02:23.468649 | orchestrator | }
2026-04-13 00:02:23.468653 | orchestrator |
2026-04-13 00:02:23.468657 | orchestrator | + binding (known after apply)
2026-04-13 00:02:23.468661 | orchestrator |
2026-04-13 00:02:23.468664 | orchestrator | + fixed_ip {
2026-04-13 00:02:23.468668 | orchestrator | + ip_address = "192.168.16.11"
2026-04-13 00:02:23.468672 | orchestrator | + subnet_id = (known after apply)
2026-04-13 00:02:23.468676 | orchestrator | }
2026-04-13 00:02:23.468680 | orchestrator | }
2026-04-13 00:02:23.470764 | orchestrator |
2026-04-13 00:02:23.470815 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created
2026-04-13 00:02:23.470821 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-13 00:02:23.470826 | orchestrator | + admin_state_up = (known after apply)
2026-04-13 00:02:23.470830 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-13 00:02:23.470834 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-13 00:02:23.470838 | orchestrator | + all_tags = (known after apply)
2026-04-13 00:02:23.470852 | orchestrator | + device_id = (known after apply)
2026-04-13 00:02:23.470856 | orchestrator | + device_owner = (known after apply)
2026-04-13 00:02:23.470859 | orchestrator | + dns_assignment = (known after apply)
2026-04-13 00:02:23.470863 | orchestrator | + dns_name = (known after apply)
2026-04-13 00:02:23.470872 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.470876 | orchestrator | + mac_address = (known after apply)
2026-04-13 00:02:23.470880 | orchestrator | + network_id = (known after apply)
2026-04-13 00:02:23.470884 | orchestrator | + port_security_enabled = (known after apply)
2026-04-13 00:02:23.470888 | orchestrator | + qos_policy_id = (known after apply)
2026-04-13 00:02:23.470891 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.470895 | orchestrator | + security_group_ids = (known after apply)
2026-04-13 00:02:23.470899 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.470903 | orchestrator |
2026-04-13 00:02:23.470907 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.470911 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-13 00:02:23.470915 | orchestrator | }
2026-04-13 00:02:23.470919 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.470923 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-13 00:02:23.470926 | orchestrator | }
2026-04-13 00:02:23.470930 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.470934 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-13 00:02:23.470938 | orchestrator | }
2026-04-13 00:02:23.470941 | orchestrator |
2026-04-13 00:02:23.470945 | orchestrator | + binding (known after apply)
2026-04-13 00:02:23.470949 | orchestrator |
2026-04-13 00:02:23.470953 | orchestrator | + fixed_ip {
2026-04-13 00:02:23.470957 | orchestrator | + ip_address = "192.168.16.12"
2026-04-13 00:02:23.470961 | orchestrator | + subnet_id = (known after apply)
2026-04-13 00:02:23.470965 | orchestrator | }
2026-04-13 00:02:23.470969 | orchestrator | }
2026-04-13 00:02:23.471114 | orchestrator |
2026-04-13 00:02:23.471126 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created
2026-04-13 00:02:23.471131 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-13 00:02:23.471135 | orchestrator | + admin_state_up = (known after apply)
2026-04-13 00:02:23.471139 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-13 00:02:23.471142 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-13 00:02:23.471146 | orchestrator | + all_tags = (known after apply)
2026-04-13 00:02:23.471150 | orchestrator | + device_id = (known after apply)
2026-04-13 00:02:23.471154 | orchestrator | + device_owner = (known after apply)
2026-04-13 00:02:23.471158 | orchestrator | + dns_assignment = (known after apply)
2026-04-13 00:02:23.471162 | orchestrator | + dns_name = (known after apply)
2026-04-13 00:02:23.471165 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.471169 | orchestrator | + mac_address = (known after apply)
2026-04-13 00:02:23.471173 | orchestrator | + network_id = (known after apply)
2026-04-13 00:02:23.471177 | orchestrator | + port_security_enabled = (known after apply)
2026-04-13 00:02:23.471181 | orchestrator | + qos_policy_id = (known after apply)
2026-04-13 00:02:23.471185 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.471188 | orchestrator | + security_group_ids = (known after apply)
2026-04-13 00:02:23.471192 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.471196 | orchestrator |
2026-04-13 00:02:23.471200 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.471204 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-13 00:02:23.471208 | orchestrator | }
2026-04-13 00:02:23.471212 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.471216 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-13 00:02:23.471219 | orchestrator | }
2026-04-13 00:02:23.471223 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.471227 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-13 00:02:23.471231 | orchestrator | }
2026-04-13 00:02:23.471235 | orchestrator |
2026-04-13 00:02:23.471242 | orchestrator | + binding (known after apply)
2026-04-13 00:02:23.471246 | orchestrator |
2026-04-13 00:02:23.471250 | orchestrator | + fixed_ip {
2026-04-13 00:02:23.471254 | orchestrator | + ip_address = "192.168.16.13"
2026-04-13 00:02:23.471258 | orchestrator | + subnet_id = (known after apply)
2026-04-13 00:02:23.471262 | orchestrator | }
2026-04-13 00:02:23.471265 | orchestrator | }
2026-04-13 00:02:23.471433 | orchestrator |
2026-04-13 00:02:23.471447 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created
2026-04-13 00:02:23.471469 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-13 00:02:23.471473 | orchestrator | + admin_state_up = (known after apply)
2026-04-13 00:02:23.471477 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-13 00:02:23.471481 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-13 00:02:23.471485 | orchestrator | + all_tags = (known after apply)
2026-04-13 00:02:23.471489 | orchestrator | + device_id = (known after apply)
2026-04-13 00:02:23.471493 | orchestrator | + device_owner = (known after apply)
2026-04-13 00:02:23.471497 | orchestrator | + dns_assignment = (known after apply)
2026-04-13 00:02:23.471500 | orchestrator | + dns_name = (known after apply)
2026-04-13 00:02:23.471504 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.471508 | orchestrator | + mac_address = (known after apply)
2026-04-13 00:02:23.471512 | orchestrator | + network_id = (known after apply)
2026-04-13 00:02:23.471516 | orchestrator | + port_security_enabled = (known after apply)
2026-04-13 00:02:23.471520 | orchestrator | + qos_policy_id = (known after apply)
2026-04-13 00:02:23.471524 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.471527 | orchestrator | + security_group_ids = (known after apply)
2026-04-13 00:02:23.471531 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.471537 | orchestrator |
2026-04-13 00:02:23.471541 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.471545 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-13 00:02:23.471549 | orchestrator | }
2026-04-13 00:02:23.471552 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.471556 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-13 00:02:23.471560 | orchestrator | }
2026-04-13 00:02:23.471564 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.471568 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-13 00:02:23.471572 | orchestrator | }
2026-04-13 00:02:23.471575 | orchestrator |
2026-04-13 00:02:23.471579 | orchestrator | + binding (known after apply)
2026-04-13 00:02:23.471583 | orchestrator |
2026-04-13 00:02:23.471587 | orchestrator | + fixed_ip {
2026-04-13 00:02:23.471591 | orchestrator | + ip_address = "192.168.16.14"
2026-04-13 00:02:23.471595 | orchestrator | + subnet_id = (known after apply)
2026-04-13 00:02:23.471598 | orchestrator | }
2026-04-13 00:02:23.471602 | orchestrator | }
2026-04-13 00:02:23.471746 | orchestrator |
2026-04-13 00:02:23.471758 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created
2026-04-13 00:02:23.471762 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-13 00:02:23.471766 | orchestrator | + admin_state_up = (known after apply)
2026-04-13 00:02:23.471770 | orchestrator | + all_fixed_ips = (known after apply)
2026-04-13 00:02:23.471774 | orchestrator | + all_security_group_ids = (known after apply)
2026-04-13 00:02:23.471777 | orchestrator | + all_tags = (known after apply)
2026-04-13 00:02:23.471781 | orchestrator | + device_id = (known after apply)
2026-04-13 00:02:23.471785 | orchestrator | + device_owner = (known after apply)
2026-04-13 00:02:23.471789 | orchestrator | + dns_assignment = (known after apply)
2026-04-13 00:02:23.471792 | orchestrator | + dns_name = (known after apply)
2026-04-13 00:02:23.471796 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.471800 | orchestrator | + mac_address = (known after apply)
2026-04-13 00:02:23.471804 | orchestrator | + network_id = (known after apply)
2026-04-13 00:02:23.471807 | orchestrator | + port_security_enabled = (known after apply)
2026-04-13 00:02:23.471811 | orchestrator | + qos_policy_id = (known after apply)
2026-04-13 00:02:23.471820 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.471824 | orchestrator | + security_group_ids = (known after apply)
2026-04-13 00:02:23.471828 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.471831 | orchestrator |
2026-04-13 00:02:23.471835 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.471839 | orchestrator | + ip_address = "192.168.16.254/32"
2026-04-13 00:02:23.471843 | orchestrator | }
2026-04-13 00:02:23.471847 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.471850 | orchestrator | + ip_address = "192.168.16.8/32"
2026-04-13 00:02:23.471854 | orchestrator | }
2026-04-13 00:02:23.471858 | orchestrator | + allowed_address_pairs {
2026-04-13 00:02:23.471861 | orchestrator | + ip_address = "192.168.16.9/32"
2026-04-13 00:02:23.471865 | orchestrator | }
2026-04-13 00:02:23.471869 | orchestrator |
2026-04-13 00:02:23.471876 | orchestrator | + binding (known after apply)
2026-04-13 00:02:23.471880 | orchestrator |
2026-04-13 00:02:23.471884 | orchestrator | + fixed_ip {
2026-04-13 00:02:23.471888 | orchestrator | + ip_address = "192.168.16.15"
2026-04-13 00:02:23.471892 | orchestrator | + subnet_id = (known after apply)
2026-04-13 00:02:23.471895 | orchestrator | }
2026-04-13 00:02:23.471899 | orchestrator | }
2026-04-13 00:02:23.471944 | orchestrator |
2026-04-13 00:02:23.471955 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created
2026-04-13 00:02:23.471959 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-04-13 00:02:23.471963 | orchestrator | + force_destroy = false
2026-04-13 00:02:23.471967 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.471971 | orchestrator | + port_id = (known after apply)
2026-04-13 00:02:23.471975 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.471978 | orchestrator | + router_id = (known after apply)
2026-04-13 00:02:23.471982 | orchestrator | + subnet_id = (known after apply)
2026-04-13 00:02:23.471986 | orchestrator | }
2026-04-13 00:02:23.472069 | orchestrator |
2026-04-13 00:02:23.472080 | orchestrator | # openstack_networking_router_v2.router will be created
2026-04-13 00:02:23.472084 | orchestrator | + resource "openstack_networking_router_v2" "router" {
2026-04-13 00:02:23.472088 | orchestrator | + admin_state_up = (known after apply)
2026-04-13 00:02:23.472092 | orchestrator | + all_tags = (known after apply)
2026-04-13 00:02:23.472096 | orchestrator | + availability_zone_hints = [
2026-04-13 00:02:23.472100 | orchestrator | + "nova",
2026-04-13 00:02:23.472104 | orchestrator | ]
2026-04-13 00:02:23.472108 | orchestrator | + distributed = (known after apply)
2026-04-13 00:02:23.472111 | orchestrator | + enable_snat = (known after apply)
2026-04-13 00:02:23.472115 | orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-04-13 00:02:23.472119 | orchestrator | + external_qos_policy_id = (known after apply)
2026-04-13 00:02:23.472123 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.472127 | orchestrator | + name = "testbed"
2026-04-13 00:02:23.472131 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.472134 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.472138 | orchestrator |
2026-04-13 00:02:23.472142 | orchestrator | + external_fixed_ip (known after apply)
2026-04-13 00:02:23.472146 | orchestrator | }
2026-04-13 00:02:23.472226 | orchestrator |
2026-04-13 00:02:23.472237 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-04-13 00:02:23.472242 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-04-13 00:02:23.472246 | orchestrator | + description = "ssh"
2026-04-13 00:02:23.472250 | orchestrator | + direction = "ingress"
2026-04-13 00:02:23.472254 | orchestrator | + ethertype = "IPv4"
2026-04-13 00:02:23.472257 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.472261 | orchestrator | + port_range_max = 22
2026-04-13 00:02:23.472265 | orchestrator | + port_range_min = 22
2026-04-13 00:02:23.472269 | orchestrator | + protocol = "tcp"
2026-04-13 00:02:23.472273 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.472280 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-13 00:02:23.472284 | orchestrator | + remote_group_id = (known after apply)
2026-04-13 00:02:23.472288 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-13 00:02:23.472291 | orchestrator | + security_group_id = (known after apply)
2026-04-13 00:02:23.472295 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.472299 | orchestrator | }
2026-04-13 00:02:23.472375 | orchestrator |
2026-04-13 00:02:23.472386 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-04-13 00:02:23.472391 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-04-13 00:02:23.472394 | orchestrator | + description = "wireguard"
2026-04-13 00:02:23.472398 | orchestrator | + direction = "ingress"
2026-04-13 00:02:23.472402 | orchestrator | + ethertype = "IPv4"
2026-04-13 00:02:23.472406 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.472409 | orchestrator | + port_range_max = 51820
2026-04-13 00:02:23.472413 | orchestrator | + port_range_min = 51820
2026-04-13 00:02:23.472417 | orchestrator | + protocol = "udp"
2026-04-13 00:02:23.472421 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.472425 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-13 00:02:23.472428 | orchestrator | + remote_group_id = (known after apply)
2026-04-13 00:02:23.472432 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-13 00:02:23.472436 | orchestrator | + security_group_id = (known after apply)
2026-04-13 00:02:23.472440 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.472444 | orchestrator | }
2026-04-13 00:02:23.472545 | orchestrator |
2026-04-13 00:02:23.472558 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-04-13 00:02:23.472563 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-04-13 00:02:23.472567 | orchestrator | + direction = "ingress"
2026-04-13 00:02:23.472570 | orchestrator | + ethertype = "IPv4"
2026-04-13 00:02:23.472574 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.472578 | orchestrator | + protocol = "tcp"
2026-04-13 00:02:23.472582 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.472585 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-13 00:02:23.472589 | orchestrator | + remote_group_id = (known after apply)
2026-04-13 00:02:23.472593 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-04-13 00:02:23.472597 | orchestrator | + security_group_id = (known after apply)
2026-04-13 00:02:23.472600 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.472604 | orchestrator | }
2026-04-13 00:02:23.472665 | orchestrator |
2026-04-13 00:02:23.472676 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-04-13 00:02:23.472680 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-04-13 00:02:23.472684 | orchestrator | + direction = "ingress"
2026-04-13 00:02:23.472688 | orchestrator | + ethertype = "IPv4"
2026-04-13 00:02:23.472692 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.472695 | orchestrator | + protocol = "udp"
2026-04-13 00:02:23.472699 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.472703 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-13 00:02:23.472707 | orchestrator | + remote_group_id = (known after apply)
2026-04-13 00:02:23.472711 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-04-13 00:02:23.472714 | orchestrator | + security_group_id = (known after apply)
2026-04-13 00:02:23.472718 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.472722 | orchestrator | }
2026-04-13 00:02:23.472782 | orchestrator |
2026-04-13 00:02:23.472794 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-04-13 00:02:23.472802 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-04-13 00:02:23.472806 | orchestrator | + direction = "ingress"
2026-04-13 00:02:23.472810 | orchestrator | + ethertype = "IPv4"
2026-04-13 00:02:23.472814 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.472817 | orchestrator | + protocol = "icmp"
2026-04-13 00:02:23.472821 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.472825 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-13 00:02:23.472829 | orchestrator | + remote_group_id = (known after apply)
2026-04-13 00:02:23.472832 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-13 00:02:23.472836 | orchestrator | + security_group_id = (known after apply)
2026-04-13 00:02:23.472840 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.472844 | orchestrator | }
2026-04-13 00:02:23.472905 | orchestrator |
2026-04-13 00:02:23.472916 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-04-13 00:02:23.472921 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-04-13 00:02:23.472925 | orchestrator | + direction = "ingress"
2026-04-13 00:02:23.472928 | orchestrator | + ethertype = "IPv4"
2026-04-13 00:02:23.472932 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.472936 | orchestrator | + protocol = "tcp"
2026-04-13 00:02:23.472940 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.472943 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-13 00:02:23.472950 | orchestrator | + remote_group_id = (known after apply)
2026-04-13 00:02:23.472954 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-13 00:02:23.472964 | orchestrator | + security_group_id = (known after apply)
2026-04-13 00:02:23.472967 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.472971 | orchestrator | }
2026-04-13 00:02:23.473033 | orchestrator |
2026-04-13 00:02:23.473045 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-04-13 00:02:23.473049 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-04-13 00:02:23.473053 | orchestrator | + direction = "ingress"
2026-04-13 00:02:23.473057 | orchestrator | + ethertype = "IPv4"
2026-04-13 00:02:23.473061 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.473064 | orchestrator | + protocol = "udp"
2026-04-13 00:02:23.473068 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.473072 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-13 00:02:23.473076 | orchestrator | + remote_group_id = (known after apply)
2026-04-13 00:02:23.473079 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-13 00:02:23.473083 | orchestrator | + security_group_id = (known after apply)
2026-04-13 00:02:23.473087 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.473091 | orchestrator | }
2026-04-13 00:02:23.473153 | orchestrator |
2026-04-13 00:02:23.473165 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-04-13 00:02:23.473169 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-04-13 00:02:23.473173 | orchestrator | + direction = "ingress"
2026-04-13 00:02:23.473182 | orchestrator | + ethertype = "IPv4"
2026-04-13 00:02:23.473186 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.473190 | orchestrator | + protocol = "icmp"
2026-04-13 00:02:23.473194 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.473198 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-13 00:02:23.473202 | orchestrator | + remote_group_id = (known after apply)
2026-04-13 00:02:23.473205 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-13 00:02:23.473209 | orchestrator | + security_group_id = (known after apply)
2026-04-13 00:02:23.473213 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.473220 | orchestrator | }
2026-04-13 00:02:23.473291 | orchestrator |
2026-04-13 00:02:23.473302 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-04-13 00:02:23.473307 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-04-13 00:02:23.473310 | orchestrator | + description = "vrrp"
2026-04-13 00:02:23.473314 | orchestrator | + direction = "ingress"
2026-04-13 00:02:23.473318 | orchestrator | + ethertype = "IPv4"
2026-04-13 00:02:23.473322 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.473326 | orchestrator | + protocol = "112"
2026-04-13 00:02:23.473329 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.473333 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-13 00:02:23.473337 | orchestrator | + remote_group_id = (known after apply)
2026-04-13 00:02:23.473341 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-13 00:02:23.473344 | orchestrator | + security_group_id = (known after apply)
2026-04-13 00:02:23.473348 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.473352 | orchestrator | }
2026-04-13 00:02:23.473398 | orchestrator |
2026-04-13 00:02:23.473409 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-04-13 00:02:23.473414 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-04-13 00:02:23.473417 | orchestrator | + all_tags = (known after apply)
2026-04-13 00:02:23.473421 | orchestrator | + description = "management security group"
2026-04-13 00:02:23.473425 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.473429 | orchestrator | + name = "testbed-management"
2026-04-13 00:02:23.473433 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.473436 | orchestrator | + stateful = (known after apply)
2026-04-13 00:02:23.473440 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.473444 | orchestrator | }
2026-04-13 00:02:23.473505 | orchestrator |
2026-04-13 00:02:23.473517 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-04-13 00:02:23.473521 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-04-13 00:02:23.473525 | orchestrator | + all_tags = (known after apply)
2026-04-13 00:02:23.473529 | orchestrator | + description = "node security group"
2026-04-13 00:02:23.473533 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.473537 | orchestrator | + name = "testbed-node"
2026-04-13 00:02:23.473540 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.473544 | orchestrator | + stateful = (known after apply)
2026-04-13 00:02:23.473548 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.473552 | orchestrator | }
2026-04-13 00:02:23.481096 | orchestrator |
2026-04-13 00:02:23.481156 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-04-13 00:02:23.481167 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-04-13 00:02:23.481174 | orchestrator | + all_tags = (known after apply)
2026-04-13 00:02:23.481181 | orchestrator | + cidr = "192.168.16.0/20"
2026-04-13 00:02:23.481187 | orchestrator | + dns_nameservers = [
2026-04-13 00:02:23.481195 | orchestrator | + "8.8.8.8",
2026-04-13 00:02:23.481201 | orchestrator | + "9.9.9.9",
2026-04-13 00:02:23.481207 | orchestrator | ]
2026-04-13 00:02:23.481214 | orchestrator | + enable_dhcp = true
2026-04-13 00:02:23.481220 | orchestrator | + gateway_ip = (known after apply)
2026-04-13 00:02:23.481227 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.481234 | orchestrator | + ip_version = 4
2026-04-13 00:02:23.481240 | orchestrator | + ipv6_address_mode = (known after apply)
2026-04-13 00:02:23.481247 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-04-13 00:02:23.481254 | orchestrator | + name = "subnet-testbed-management"
2026-04-13 00:02:23.481261 | orchestrator | + network_id = (known after apply)
2026-04-13 00:02:23.481267 | orchestrator | + no_gateway = false
2026-04-13 00:02:23.481273 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.481279 | orchestrator | + service_types = (known after apply)
2026-04-13 00:02:23.481295 | orchestrator | + tenant_id = (known after apply)
2026-04-13 00:02:23.481299 | orchestrator |
2026-04-13 00:02:23.481303 | orchestrator | + allocation_pool {
2026-04-13 00:02:23.481308 | orchestrator | + end = "192.168.31.250"
2026-04-13 00:02:23.481311 | orchestrator | + start = "192.168.31.200"
2026-04-13 00:02:23.481315 | orchestrator | }
2026-04-13 00:02:23.481319 | orchestrator | }
2026-04-13 00:02:23.481323 | orchestrator |
2026-04-13 00:02:23.481327 | orchestrator | # terraform_data.image will be created
2026-04-13 00:02:23.481331 | orchestrator | + resource "terraform_data" "image" {
2026-04-13 00:02:23.481335 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.481339 | orchestrator | + input = "Ubuntu 24.04"
2026-04-13 00:02:23.481343 | orchestrator | + output = (known after apply)
2026-04-13 00:02:23.481347 | orchestrator | }
2026-04-13 00:02:23.481351 | orchestrator |
2026-04-13 00:02:23.481354 | orchestrator | # terraform_data.image_node will be created
2026-04-13 00:02:23.481358 | orchestrator | + resource "terraform_data" "image_node" {
2026-04-13 00:02:23.481362 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.481366 | orchestrator | + input = "Ubuntu 24.04"
2026-04-13 00:02:23.481370 | orchestrator | + output = (known after apply)
2026-04-13 00:02:23.481374 | orchestrator | }
2026-04-13 00:02:23.481377 | orchestrator |
2026-04-13 00:02:23.481381 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-04-13 00:02:23.481385 | orchestrator |
2026-04-13 00:02:23.481390 | orchestrator | Changes to Outputs:
2026-04-13 00:02:23.481396 | orchestrator | + manager_address = (sensitive value)
2026-04-13 00:02:23.481402 | orchestrator | + private_key = (sensitive value)
2026-04-13 00:02:23.666836 | orchestrator | terraform_data.image: Creating...
2026-04-13 00:02:23.666884 | orchestrator | terraform_data.image: Creation complete after 0s [id=c1615d6c-3a66-283c-71e4-7197341b2621]
2026-04-13 00:02:23.752968 | orchestrator | terraform_data.image_node: Creating...
2026-04-13 00:02:23.754785 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=2b75c4cc-d26c-8d91-dc61-c0d64c66e099]
2026-04-13 00:02:23.774156 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-13 00:02:23.774212 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-13 00:02:23.778086 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-13 00:02:23.778134 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-13 00:02:23.779480 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-13 00:02:23.797587 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-13 00:02:23.802068 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-13 00:02:23.802112 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-13 00:02:23.802118 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-13 00:02:23.802124 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-13 00:02:24.257108 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-13 00:02:24.265994 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-13 00:02:24.269069 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-13 00:02:24.273649 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-13 00:02:24.400386 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-04-13 00:02:24.404009 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-13 00:02:25.053069 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=730db103-6b70-4a84-aa9e-192523d43e54]
2026-04-13 00:02:25.060592 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-13 00:02:27.666110 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=36e0079f-b8cc-463e-a3d4-692b22821d05]
2026-04-13 00:02:27.678493 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-13 00:02:27.711869 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=9aa3d683-c16f-4a6c-9923-af2b5f9d7d5e]
2026-04-13 00:02:27.714371 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-13 00:02:27.760578 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=210099df-3e7f-48c2-8d6b-572e8a7c1923]
2026-04-13 00:02:27.765589 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-13 00:02:27.767590 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=9561ecc7-53f2-4f93-a506-8a94937d6a2f]
2026-04-13 00:02:27.768652 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=8eda79f4-f653-48ca-bc7b-44aba519c194]
2026-04-13 00:02:27.772385 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-13 00:02:27.772425 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-13 00:02:27.846513 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=7036bc7f-1d9f-4bbc-89ec-79faed4557a7]
2026-04-13 00:02:27.855952 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-13 00:02:27.991923 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=0679126a-4000-4d61-a7db-c334b9d13f77]
2026-04-13 00:02:28.005258 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-13 00:02:28.011241 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=18da8c5c88942723eb3481e97775aeb6d126acb7]
2026-04-13 00:02:28.023892 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-13 00:02:28.029152 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=fe694ba963261b4b230ef5c04470157808b5c55f]
2026-04-13 00:02:28.036694 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-13 00:02:28.075976 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=2beae69f-4f2c-4ffb-b1cc-4fe56058469a]
2026-04-13 00:02:28.230805 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=64ba95e0-52ec-4080-a400-33c71893d605]
2026-04-13 00:02:28.528115 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=52a3323e-96a7-47fd-a72f-5f8ab9011881]
2026-04-13 00:02:29.001171 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=f2a4ea96-e301-4177-8df3-4b6b57f2f587]
2026-04-13 00:02:29.017744 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-13 00:02:31.288133 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=4039d428-e5d1-48e6-9940-0f36e423ec3a]
2026-04-13 00:02:31.316327 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7]
2026-04-13 00:02:31.337253 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=1b3da9c0-5113-4abf-81e6-0eb99113ad06]
2026-04-13 00:02:31.424660 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=a7dd9f71-8adc-487c-8257-2cef985b8ae9]
2026-04-13 00:02:31.484776 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=cc9058a9-513c-44a1-a232-346d8ffae651]
2026-04-13 00:02:31.609632 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=194ebb96-0dba-4c24-aa8b-2b193008c6b3]
2026-04-13 00:02:35.341666 | orchestrator | openstack_networking_router_v2.router: Creation complete after 6s [id=33f18f49-1042-44ed-b29d-9b3c0b85afd6]
2026-04-13 00:02:35.345740 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-13 00:02:35.348431 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-13 00:02:35.348897 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-13 00:02:35.607282 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=e92947e8-86e8-41fe-8a1e-51e57de2cd8b]
2026-04-13 00:02:35.624357 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-13 00:02:35.631049 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-13 00:02:35.631103 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-13 00:02:35.634993 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-13 00:02:35.635036 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-13 00:02:35.638133 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-13 00:02:35.639254 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-13 00:02:35.639874 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-13 00:02:35.828624 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=33404ab7-3541-454f-ae53-8f044450e671]
2026-04-13 00:02:35.839999 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-13 00:02:35.867730 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=126dc81a-c640-49f1-ba09-adb98527124e]
2026-04-13 00:02:35.881417 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-13 00:02:36.502425 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=3e5f264c-ab3e-43e9-82b8-3e26faeb93f3]
2026-04-13 00:02:36.509339 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-13 00:02:36.589863 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=a82a4cc7-9b4e-4ff9-b917-35a656e01585]
2026-04-13 00:02:36.596671 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-13 00:02:36.699353 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=9b7e3c61-7a2c-4a21-a35d-ccb5a7e52e8d]
2026-04-13 00:02:36.702783 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-13 00:02:36.739118 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=7b550290-8b16-48d0-8e5f-36c002d3b12e]
2026-04-13 00:02:36.743204 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-13 00:02:37.163074 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=3b959fb7-1d6e-41ae-a3d1-9aa3395008fa]
2026-04-13 00:02:37.490263 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-13 00:02:37.490333 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=3a4daab2-8f45-4e44-957d-f564b7058cdf]
2026-04-13 00:02:37.490347 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-13 00:02:37.490358 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=070504fb-1f71-41f7-a745-d63b34ce79a9]
2026-04-13 00:02:37.657051 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=2123b620-4caa-4be2-81f7-f3c32c37d552]
2026-04-13 00:02:37.829553 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=c4e1fd38-8862-4aa3-9ddb-0419ae6d6c4a]
2026-04-13 00:02:37.832163 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=480a71ba-71f6-49d0-a616-1fd1d83afe3f]
2026-04-13 00:02:37.861160 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=485bf903-1938-4ec4-b03e-30006275ef89]
2026-04-13 00:02:38.101714 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=6e0c8404-f369-4706-9703-42c5500e5931]
2026-04-13 00:02:38.323372 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 2s [id=1b3ebee7-204f-4e64-b2d5-a6cab1b765f3]
2026-04-13 00:02:38.423563 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=78913dc0-7dd6-4129-80d6-4f199f9aea9c]
2026-04-13 00:02:38.516491 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 3s [id=e2020c3d-427f-43e7-911d-bbb1fe6f35e0]
2026-04-13 00:02:43.571255 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 9s [id=ec8bf714-e0ca-4697-8339-00ab8c05f8b4]
2026-04-13 00:02:43.597851 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-13 00:02:43.604349 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-13 00:02:43.605646 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-13 00:02:43.615862 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-13 00:02:43.616253 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-13 00:02:43.618818 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-13 00:02:43.626330 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-13 00:02:45.245807 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=33406c86-e56c-4c45-b70b-9e30a6923abf]
2026-04-13 00:02:45.260400 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-13 00:02:45.265766 | orchestrator | local_file.inventory: Creating...
2026-04-13 00:02:45.267846 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-13 00:02:45.270861 | orchestrator | local_file.inventory: Creation complete after 0s [id=278293e87930c5c9fa6f44ab27a8badff59174b1]
2026-04-13 00:02:45.273628 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=85a475a558145c5c1daaf8020d251becc77f9b73]
2026-04-13 00:02:46.173888 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=33406c86-e56c-4c45-b70b-9e30a6923abf]
2026-04-13 00:02:53.610946 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-13 00:02:53.611163 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-13 00:02:53.619252 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-13 00:02:53.625721 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-13 00:02:53.625786 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-13 00:02:53.626820 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-13 00:03:03.618343 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-13 00:03:03.618520 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-13 00:03:03.619569 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-13 00:03:03.625930 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-13 00:03:03.625982 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-13 00:03:03.627198 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-13 00:03:13.627393 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-13 00:03:13.627575 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-04-13 00:03:13.627605 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-04-13 00:03:13.627625 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-13 00:03:13.627643 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-13 00:03:13.627682 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-04-13 00:03:14.424912 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=bc935d1e-440d-423f-9cae-34b898f010ac]
2026-04-13 00:03:14.492359 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=27f5f9ca-f1dd-4a44-afe7-f9725c53144e]
2026-04-13 00:03:23.636089 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-04-13 00:03:23.636194 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-04-13 00:03:23.636207 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-04-13 00:03:23.636242 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-04-13 00:03:33.644960 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-04-13 00:03:33.645065 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-04-13 00:03:33.645082 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-04-13 00:03:33.645094 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-04-13 00:03:34.988220 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 51s [id=90bc75d8-37e6-494e-9edd-3e38c68818e5]
2026-04-13 00:03:35.085781 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=caefe757-6767-4ba1-99b9-6433018e6e78]
2026-04-13 00:03:35.203543 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 51s [id=062ffc29-0019-45a1-8728-c2b6afe63d98]
2026-04-13 00:03:43.650263 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m0s elapsed]
2026-04-13 00:03:45.386336 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m1s [id=734ecf4b-4867-403e-8075-1ee6d4a1c13b]
2026-04-13 00:03:45.393004 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-13 00:03:45.411457 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-13 00:03:45.421876 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=8167420332233038091]
2026-04-13 00:03:45.428574 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-13 00:03:45.433251 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-13 00:03:45.433929 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-13 00:03:45.437725 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-13 00:03:45.438168 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-13 00:03:45.438393 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-13 00:03:45.460197 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-13 00:03:45.464400 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-13 00:03:45.482746 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-13 00:03:48.804638 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=90bc75d8-37e6-494e-9edd-3e38c68818e5/210099df-3e7f-48c2-8d6b-572e8a7c1923]
2026-04-13 00:03:48.807405 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=caefe757-6767-4ba1-99b9-6433018e6e78/9aa3d683-c16f-4a6c-9923-af2b5f9d7d5e]
2026-04-13 00:03:48.836345 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=90bc75d8-37e6-494e-9edd-3e38c68818e5/7036bc7f-1d9f-4bbc-89ec-79faed4557a7]
2026-04-13 00:03:48.844262 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=bc935d1e-440d-423f-9cae-34b898f010ac/36e0079f-b8cc-463e-a3d4-692b22821d05]
2026-04-13 00:03:48.871217 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=caefe757-6767-4ba1-99b9-6433018e6e78/8eda79f4-f653-48ca-bc7b-44aba519c194]
2026-04-13 00:03:49.044032 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=bc935d1e-440d-423f-9cae-34b898f010ac/9561ecc7-53f2-4f93-a506-8a94937d6a2f]
2026-04-13 00:03:54.948231 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=caefe757-6767-4ba1-99b9-6433018e6e78/64ba95e0-52ec-4080-a400-33c71893d605]
2026-04-13 00:03:54.962986 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=90bc75d8-37e6-494e-9edd-3e38c68818e5/2beae69f-4f2c-4ffb-b1cc-4fe56058469a]
2026-04-13 00:03:55.168539 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=bc935d1e-440d-423f-9cae-34b898f010ac/0679126a-4000-4d61-a7db-c334b9d13f77]
2026-04-13 00:03:55.486431 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-13 00:04:05.488537 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-13 00:04:06.182333 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=aee4d708-53c5-4725-82fe-c70f9aabe671]
2026-04-13 00:04:06.231707 | orchestrator |
2026-04-13 00:04:06.231822 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-13 00:04:06.231842 | orchestrator |
2026-04-13 00:04:06.231856 | orchestrator | Outputs:
2026-04-13 00:04:06.231869 | orchestrator |
2026-04-13 00:04:06.231882 | orchestrator | manager_address =
2026-04-13 00:04:06.231895 | orchestrator | private_key =
2026-04-13 00:04:06.575199 | orchestrator | ok: Runtime: 0:01:49.561930
2026-04-13 00:04:06.608137 |
2026-04-13 00:04:06.608287 | TASK [Create infrastructure (stable)]
2026-04-13 00:04:07.143583 | orchestrator | skipping: Conditional result was False
2026-04-13 00:04:07.162123 |
2026-04-13 00:04:07.162336 | TASK [Fetch manager address]
2026-04-13 00:04:07.647371 | orchestrator | ok
2026-04-13 00:04:07.657179 |
2026-04-13 00:04:07.657340 | TASK [Set manager_host address]
2026-04-13 00:04:07.760608 | orchestrator | ok
2026-04-13 00:04:07.767956 |
2026-04-13 00:04:07.768068 | LOOP [Update ansible collections]
2026-04-13 00:04:08.921701 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-13 00:04:08.921989 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-13 00:04:08.922054 | orchestrator | Starting galaxy collection install process
2026-04-13 00:04:08.922082 | orchestrator | Process install dependency map
2026-04-13 00:04:08.922111 | orchestrator | Starting collection install process
2026-04-13 00:04:08.922144 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons'
2026-04-13 00:04:08.922173 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons
2026-04-13 00:04:08.922203 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-13 00:04:08.922269 | orchestrator | ok: Item: commons Runtime: 0:00:00.738612
2026-04-13 00:04:09.875101 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-13 00:04:09.875287 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-13 00:04:09.875340 | orchestrator | Starting galaxy collection install process
2026-04-13 00:04:09.875380 | orchestrator | Process install dependency map
2026-04-13 00:04:09.875416 | orchestrator | Starting collection install process
2026-04-13 00:04:09.875452 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services'
2026-04-13 00:04:09.875487 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services
2026-04-13 00:04:09.875520 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-13 00:04:09.875573 | orchestrator | ok: Item: services Runtime: 0:00:00.680976
2026-04-13 00:04:09.889743 |
2026-04-13 00:04:09.889877 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-13 00:04:20.517027 | orchestrator | ok
2026-04-13 00:04:20.527528 |
2026-04-13 00:04:20.527651 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-13 00:05:20.565411 | orchestrator | ok
2026-04-13 00:05:20.577115 |
2026-04-13 00:05:20.577395 | TASK [Fetch manager ssh hostkey]
2026-04-13 00:05:22.158972 | orchestrator | Output suppressed because no_log was given
2026-04-13 00:05:22.174778 |
2026-04-13 00:05:22.174964 | TASK [Get ssh keypair from terraform environment]
2026-04-13 00:05:22.710751 | orchestrator | ok: Runtime: 0:00:00.011484
2026-04-13 00:05:22.730409 |
2026-04-13 00:05:22.730563 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-13 00:05:22.762039 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-13 00:05:22.768845 |
2026-04-13 00:05:22.768955 | TASK [Run manager part 0]
2026-04-13 00:05:23.752002 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-13 00:05:23.806953 | orchestrator |
2026-04-13 00:05:23.807021 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-13 00:05:23.807033 | orchestrator |
2026-04-13 00:05:23.807051 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-13 00:05:25.838071 | orchestrator | ok: [testbed-manager]
2026-04-13 00:05:25.838131 | orchestrator |
2026-04-13 00:05:25.838156 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-13 00:05:25.838167 | orchestrator |
2026-04-13 00:05:25.838178 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-13 00:05:27.889713 | orchestrator | ok: [testbed-manager]
2026-04-13 00:05:27.889784 | orchestrator |
2026-04-13 00:05:27.889792 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-13 00:05:28.594475 | orchestrator | ok: [testbed-manager]
2026-04-13 00:05:28.594575 | orchestrator |
2026-04-13 00:05:28.594590 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-13 00:05:28.653929 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:05:28.653991 | orchestrator |
2026-04-13 00:05:28.654002 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-13 00:05:28.700376 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:05:28.700452 | orchestrator |
2026-04-13 00:05:28.700464 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-04-13 00:05:28.742295 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:05:28.742389 | orchestrator |
2026-04-13 00:05:28.742406 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-04-13 00:05:29.555341 | orchestrator | changed: [testbed-manager]
2026-04-13 00:05:29.555412 | orchestrator |
2026-04-13 00:05:29.555424 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-04-13 00:08:34.119736 | orchestrator | changed: [testbed-manager]
2026-04-13 00:08:34.119810 | orchestrator |
2026-04-13 00:08:34.119821 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-13 00:09:58.441416 | orchestrator | changed: [testbed-manager]
2026-04-13 00:09:58.441483 | orchestrator |
2026-04-13 00:09:58.441503 | orchestrator | TASK [Install required packages] ***********************************************
2026-04-13 00:10:26.732107 | orchestrator | changed: [testbed-manager]
2026-04-13 00:10:26.732205 | orchestrator |
2026-04-13 00:10:26.732221 | orchestrator | TASK [Remove some python packages] *********************************************
2026-04-13 00:10:36.459579 | orchestrator | changed: [testbed-manager]
2026-04-13 00:10:36.459716 | orchestrator |
2026-04-13 00:10:36.459742 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-04-13 00:10:36.512210 | orchestrator | ok: [testbed-manager]
2026-04-13 00:10:36.512259 | orchestrator |
2026-04-13 00:10:36.512272 | orchestrator | TASK
[Get current user] ******************************************************** 2026-04-13 00:10:37.319903 | orchestrator | ok: [testbed-manager] 2026-04-13 00:10:37.319938 | orchestrator | 2026-04-13 00:10:37.319943 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-13 00:10:38.119469 | orchestrator | changed: [testbed-manager] 2026-04-13 00:10:38.119572 | orchestrator | 2026-04-13 00:10:38.119604 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-13 00:10:44.901162 | orchestrator | changed: [testbed-manager] 2026-04-13 00:10:44.901377 | orchestrator | 2026-04-13 00:10:44.901398 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-13 00:10:51.022148 | orchestrator | changed: [testbed-manager] 2026-04-13 00:10:51.022191 | orchestrator | 2026-04-13 00:10:51.022200 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-13 00:10:53.832078 | orchestrator | changed: [testbed-manager] 2026-04-13 00:10:53.832176 | orchestrator | 2026-04-13 00:10:53.832192 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-13 00:10:55.654365 | orchestrator | changed: [testbed-manager] 2026-04-13 00:10:55.654406 | orchestrator | 2026-04-13 00:10:55.654415 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-13 00:10:56.768585 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-13 00:10:56.768640 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-13 00:10:56.768763 | orchestrator | 2026-04-13 00:10:56.768774 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-13 00:10:56.812217 | orchestrator | [DEPRECATION WARNING]: The connection's stdin 
object is deprecated. Call 2026-04-13 00:10:56.812308 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-13 00:10:56.812321 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-13 00:10:56.812333 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-13 00:11:00.159206 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-13 00:11:00.159300 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-13 00:11:00.159316 | orchestrator | 2026-04-13 00:11:00.159328 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-13 00:11:00.745808 | orchestrator | changed: [testbed-manager] 2026-04-13 00:11:00.745898 | orchestrator | 2026-04-13 00:11:00.745914 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-13 00:12:21.542396 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-13 00:12:21.542509 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-13 00:12:21.542525 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-13 00:12:21.542536 | orchestrator | 2026-04-13 00:12:21.542571 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-13 00:12:23.939921 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-13 00:12:23.940190 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-13 00:12:23.940216 | orchestrator | 2026-04-13 00:12:23.940233 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-13 00:12:23.940247 | orchestrator | 2026-04-13 00:12:23.940260 | orchestrator | TASK [Gathering Facts] ********************************************************* 
2026-04-13 00:12:25.943208 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:25.943312 | orchestrator | 2026-04-13 00:12:25.943341 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-13 00:12:25.995241 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:25.995303 | orchestrator | 2026-04-13 00:12:25.995319 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-13 00:12:26.068598 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:26.068688 | orchestrator | 2026-04-13 00:12:26.068706 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-13 00:12:26.927931 | orchestrator | changed: [testbed-manager] 2026-04-13 00:12:26.928015 | orchestrator | 2026-04-13 00:12:26.928031 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-13 00:12:27.736042 | orchestrator | changed: [testbed-manager] 2026-04-13 00:12:27.736110 | orchestrator | 2026-04-13 00:12:27.736120 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-13 00:12:29.155224 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-13 00:12:29.155317 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-13 00:12:29.155332 | orchestrator | 2026-04-13 00:12:29.155346 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-13 00:12:30.687164 | orchestrator | changed: [testbed-manager] 2026-04-13 00:12:30.687222 | orchestrator | 2026-04-13 00:12:30.687236 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-13 00:12:32.462202 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-13 00:12:32.462265 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-13 
00:12:32.462295 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-13 00:12:32.462307 | orchestrator | 2026-04-13 00:12:32.462320 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-13 00:12:32.523715 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:32.523781 | orchestrator | 2026-04-13 00:12:32.523789 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-13 00:12:32.587801 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:32.587882 | orchestrator | 2026-04-13 00:12:32.587895 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-13 00:12:33.178195 | orchestrator | changed: [testbed-manager] 2026-04-13 00:12:33.178235 | orchestrator | 2026-04-13 00:12:33.178243 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-13 00:12:33.257067 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:33.257106 | orchestrator | 2026-04-13 00:12:33.257114 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-13 00:12:34.158435 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-13 00:12:34.158487 | orchestrator | changed: [testbed-manager] 2026-04-13 00:12:34.158497 | orchestrator | 2026-04-13 00:12:34.158504 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-13 00:12:34.196738 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:34.196792 | orchestrator | 2026-04-13 00:12:34.196804 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-13 00:12:34.231191 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:34.231257 | orchestrator | 2026-04-13 00:12:34.231267 | orchestrator | TASK 
[osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-13 00:12:34.268778 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:34.268839 | orchestrator | 2026-04-13 00:12:34.268849 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-13 00:12:34.342992 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:34.343077 | orchestrator | 2026-04-13 00:12:34.343092 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-13 00:12:35.087561 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:35.087652 | orchestrator | 2026-04-13 00:12:35.087677 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-13 00:12:35.087693 | orchestrator | 2026-04-13 00:12:35.087707 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-13 00:12:36.470286 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:36.470331 | orchestrator | 2026-04-13 00:12:36.470337 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-13 00:12:37.436210 | orchestrator | changed: [testbed-manager] 2026-04-13 00:12:37.436294 | orchestrator | 2026-04-13 00:12:37.436310 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:12:37.436323 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-13 00:12:37.436334 | orchestrator | 2026-04-13 00:12:38.068042 | orchestrator | ok: Runtime: 0:07:14.469948 2026-04-13 00:12:38.086522 | 2026-04-13 00:12:38.086706 | TASK [Point out that logging in on the manager is now possible] 2026-04-13 00:12:38.127807 | orchestrator | ok: It is already possible to log in to the manager with 'make login'. 
2026-04-13 00:12:38.137943 | 2026-04-13 00:12:38.138090 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-13 00:12:38.174467 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-13 00:12:38.184405 | 2026-04-13 00:12:38.184616 | TASK [Run manager part 1 + 2] 2026-04-13 00:12:39.035970 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-13 00:12:39.096163 | orchestrator | 2026-04-13 00:12:39.096280 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-13 00:12:39.096300 | orchestrator | 2026-04-13 00:12:39.096339 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-13 00:12:42.142350 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:42.142602 | orchestrator | 2026-04-13 00:12:42.142673 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-13 00:12:42.179630 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:42.179722 | orchestrator | 2026-04-13 00:12:42.179742 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-13 00:12:42.214434 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:42.214589 | orchestrator | 2026-04-13 00:12:42.214616 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-13 00:12:42.247620 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:42.247698 | orchestrator | 2026-04-13 00:12:42.247713 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-13 00:12:42.327246 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:42.327351 | orchestrator | 2026-04-13 00:12:42.327370 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-13 00:12:42.386940 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:42.387020 | orchestrator | 2026-04-13 00:12:42.387036 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-13 00:12:42.449321 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-13 00:12:42.449412 | orchestrator | 2026-04-13 00:12:42.449427 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-13 00:12:43.169375 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:43.169460 | orchestrator | 2026-04-13 00:12:43.169478 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-13 00:12:43.224674 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:43.224726 | orchestrator | 2026-04-13 00:12:43.224734 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-13 00:12:44.643805 | orchestrator | changed: [testbed-manager] 2026-04-13 00:12:44.643867 | orchestrator | 2026-04-13 00:12:44.643878 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-13 00:12:45.262918 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:45.262964 | orchestrator | 2026-04-13 00:12:45.262970 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-13 00:12:46.434510 | orchestrator | changed: [testbed-manager] 2026-04-13 00:12:46.434621 | orchestrator | 2026-04-13 00:12:46.434641 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-13 00:13:02.595355 | orchestrator | changed: [testbed-manager] 2026-04-13 00:13:02.595436 | orchestrator | 
2026-04-13 00:13:02.595456 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-13 00:13:03.349620 | orchestrator | ok: [testbed-manager] 2026-04-13 00:13:03.349728 | orchestrator | 2026-04-13 00:13:03.349759 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-13 00:13:03.441394 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:13:03.441485 | orchestrator | 2026-04-13 00:13:03.441501 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-13 00:13:04.477672 | orchestrator | changed: [testbed-manager] 2026-04-13 00:13:04.477724 | orchestrator | 2026-04-13 00:13:04.477732 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-13 00:13:05.494286 | orchestrator | changed: [testbed-manager] 2026-04-13 00:13:05.494378 | orchestrator | 2026-04-13 00:13:05.494395 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-13 00:13:06.114065 | orchestrator | changed: [testbed-manager] 2026-04-13 00:13:06.114167 | orchestrator | 2026-04-13 00:13:06.114197 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-13 00:13:06.154152 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-13 00:13:06.154310 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-13 00:13:06.154333 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-13 00:13:06.154350 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-13 00:13:08.225993 | orchestrator | changed: [testbed-manager] 2026-04-13 00:13:08.226142 | orchestrator | 2026-04-13 00:13:08.226170 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-13 00:13:18.139265 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-13 00:13:18.139337 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-13 00:13:18.139355 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-13 00:13:18.139367 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-13 00:13:18.139386 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-13 00:13:18.139398 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-13 00:13:18.139409 | orchestrator | 2026-04-13 00:13:18.139421 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-13 00:13:19.163566 | orchestrator | changed: [testbed-manager] 2026-04-13 00:13:19.163676 | orchestrator | 2026-04-13 00:13:19.163703 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-13 00:13:22.454542 | orchestrator | changed: [testbed-manager] 2026-04-13 00:13:22.454605 | orchestrator | 2026-04-13 00:13:22.454612 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-13 00:13:22.502280 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:13:22.502386 | orchestrator | 2026-04-13 00:13:22.502403 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-13 00:15:09.774811 | orchestrator | changed: [testbed-manager] 2026-04-13 00:15:09.774839 | orchestrator | 2026-04-13 00:15:09.774844 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-13 00:15:10.789182 | orchestrator | ok: [testbed-manager] 2026-04-13 00:15:10.789216 | 
orchestrator | 2026-04-13 00:15:10.789222 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:15:10.789228 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-13 00:15:10.789233 | orchestrator | 2026-04-13 00:15:11.321777 | orchestrator | ok: Runtime: 0:02:32.356586 2026-04-13 00:15:11.339154 | 2026-04-13 00:15:11.339332 | TASK [Reboot manager] 2026-04-13 00:15:12.873987 | orchestrator | ok: Runtime: 0:00:00.924446 2026-04-13 00:15:12.889030 | 2026-04-13 00:15:12.889174 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-13 00:15:30.517203 | orchestrator | ok 2026-04-13 00:15:30.524871 | 2026-04-13 00:15:30.524983 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-13 00:16:30.573586 | orchestrator | ok 2026-04-13 00:16:30.585681 | 2026-04-13 00:16:30.585819 | TASK [Deploy manager + bootstrap nodes] 2026-04-13 00:16:33.138255 | orchestrator | 2026-04-13 00:16:33.138445 | orchestrator | # DEPLOY MANAGER 2026-04-13 00:16:33.138469 | orchestrator | 2026-04-13 00:16:33.138483 | orchestrator | + set -e 2026-04-13 00:16:33.138497 | orchestrator | + echo 2026-04-13 00:16:33.138511 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-13 00:16:33.138528 | orchestrator | + echo 2026-04-13 00:16:33.138579 | orchestrator | + cat /opt/manager-vars.sh 2026-04-13 00:16:33.142987 | orchestrator | export NUMBER_OF_NODES=6 2026-04-13 00:16:33.143029 | orchestrator | 2026-04-13 00:16:33.143041 | orchestrator | export CEPH_VERSION=reef 2026-04-13 00:16:33.143054 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-13 00:16:33.143066 | orchestrator | export MANAGER_VERSION=latest 2026-04-13 00:16:33.143089 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-13 00:16:33.143100 | orchestrator | 2026-04-13 00:16:33.143118 | orchestrator | export ARA=false 2026-04-13 00:16:33.143130 | 
orchestrator | export DEPLOY_MODE=manager 2026-04-13 00:16:33.143197 | orchestrator | export TEMPEST=true 2026-04-13 00:16:33.143211 | orchestrator | export IS_ZUUL=true 2026-04-13 00:16:33.143222 | orchestrator | 2026-04-13 00:16:33.143240 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-04-13 00:16:33.143252 | orchestrator | export EXTERNAL_API=false 2026-04-13 00:16:33.143263 | orchestrator | 2026-04-13 00:16:33.143273 | orchestrator | export IMAGE_USER=ubuntu 2026-04-13 00:16:33.143288 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-13 00:16:33.143299 | orchestrator | 2026-04-13 00:16:33.143310 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-13 00:16:33.143329 | orchestrator | 2026-04-13 00:16:33.143340 | orchestrator | + echo 2026-04-13 00:16:33.143353 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-13 00:16:33.144353 | orchestrator | ++ export INTERACTIVE=false 2026-04-13 00:16:33.144390 | orchestrator | ++ INTERACTIVE=false 2026-04-13 00:16:33.144402 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-13 00:16:33.144414 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-13 00:16:33.144425 | orchestrator | + source /opt/manager-vars.sh 2026-04-13 00:16:33.144435 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-13 00:16:33.144446 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-13 00:16:33.144457 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-13 00:16:33.144467 | orchestrator | ++ CEPH_VERSION=reef 2026-04-13 00:16:33.144479 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-13 00:16:33.144489 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-13 00:16:33.144500 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-13 00:16:33.144523 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-13 00:16:33.144697 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-13 00:16:33.144734 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-13 00:16:33.144754 | orchestrator | ++ 
export ARA=false 2026-04-13 00:16:33.144773 | orchestrator | ++ ARA=false 2026-04-13 00:16:33.144792 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-13 00:16:33.144806 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-13 00:16:33.144822 | orchestrator | ++ export TEMPEST=true 2026-04-13 00:16:33.144833 | orchestrator | ++ TEMPEST=true 2026-04-13 00:16:33.144844 | orchestrator | ++ export IS_ZUUL=true 2026-04-13 00:16:33.144854 | orchestrator | ++ IS_ZUUL=true 2026-04-13 00:16:33.144865 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-04-13 00:16:33.144876 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-04-13 00:16:33.144887 | orchestrator | ++ export EXTERNAL_API=false 2026-04-13 00:16:33.144898 | orchestrator | ++ EXTERNAL_API=false 2026-04-13 00:16:33.144908 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-13 00:16:33.144919 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-13 00:16:33.144930 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-13 00:16:33.144940 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-13 00:16:33.144952 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-13 00:16:33.144962 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-13 00:16:33.144974 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-13 00:16:33.193709 | orchestrator | + docker version 2026-04-13 00:16:33.306244 | orchestrator | Client: Docker Engine - Community 2026-04-13 00:16:33.306347 | orchestrator | Version: 27.5.1 2026-04-13 00:16:33.306364 | orchestrator | API version: 1.47 2026-04-13 00:16:33.306378 | orchestrator | Go version: go1.22.11 2026-04-13 00:16:33.306389 | orchestrator | Git commit: 9f9e405 2026-04-13 00:16:33.306400 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-13 00:16:33.306412 | orchestrator | OS/Arch: linux/amd64 2026-04-13 00:16:33.306424 | orchestrator | Context: default 2026-04-13 00:16:33.306435 | orchestrator | 2026-04-13 
00:16:33.306446 | orchestrator | Server: Docker Engine - Community 2026-04-13 00:16:33.306457 | orchestrator | Engine: 2026-04-13 00:16:33.306468 | orchestrator | Version: 27.5.1 2026-04-13 00:16:33.306479 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-13 00:16:33.306520 | orchestrator | Go version: go1.22.11 2026-04-13 00:16:33.306532 | orchestrator | Git commit: 4c9b3b0 2026-04-13 00:16:33.306543 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-13 00:16:33.306553 | orchestrator | OS/Arch: linux/amd64 2026-04-13 00:16:33.306564 | orchestrator | Experimental: false 2026-04-13 00:16:33.306575 | orchestrator | containerd: 2026-04-13 00:16:33.306586 | orchestrator | Version: v2.2.2 2026-04-13 00:16:33.306597 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-13 00:16:33.306671 | orchestrator | runc: 2026-04-13 00:16:33.306685 | orchestrator | Version: 1.3.4 2026-04-13 00:16:33.306701 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-13 00:16:33.306720 | orchestrator | docker-init: 2026-04-13 00:16:33.306737 | orchestrator | Version: 0.19.0 2026-04-13 00:16:33.306757 | orchestrator | GitCommit: de40ad0 2026-04-13 00:16:33.306793 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-13 00:16:33.314794 | orchestrator | + set -e 2026-04-13 00:16:33.314895 | orchestrator | + source /opt/manager-vars.sh 2026-04-13 00:16:33.314910 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-13 00:16:33.314923 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-13 00:16:33.314934 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-13 00:16:33.314945 | orchestrator | ++ CEPH_VERSION=reef 2026-04-13 00:16:33.314956 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-13 00:16:33.314968 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-13 00:16:33.314979 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-13 00:16:33.314990 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-13 
00:16:33.315001 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-13 00:16:33.315012 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-13 00:16:33.315023 | orchestrator | ++ export ARA=false 2026-04-13 00:16:33.315034 | orchestrator | ++ ARA=false 2026-04-13 00:16:33.315045 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-13 00:16:33.315056 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-13 00:16:33.315067 | orchestrator | ++ export TEMPEST=true 2026-04-13 00:16:33.315078 | orchestrator | ++ TEMPEST=true 2026-04-13 00:16:33.315088 | orchestrator | ++ export IS_ZUUL=true 2026-04-13 00:16:33.315099 | orchestrator | ++ IS_ZUUL=true 2026-04-13 00:16:33.315110 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-04-13 00:16:33.315121 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-04-13 00:16:33.315132 | orchestrator | ++ export EXTERNAL_API=false 2026-04-13 00:16:33.315142 | orchestrator | ++ EXTERNAL_API=false 2026-04-13 00:16:33.315194 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-13 00:16:33.315205 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-13 00:16:33.315216 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-13 00:16:33.315227 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-13 00:16:33.315238 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-13 00:16:33.315249 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-13 00:16:33.315260 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-13 00:16:33.315270 | orchestrator | ++ export INTERACTIVE=false 2026-04-13 00:16:33.315281 | orchestrator | ++ INTERACTIVE=false 2026-04-13 00:16:33.315292 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-13 00:16:33.315308 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-13 00:16:33.315330 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-13 00:16:33.315342 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-13 00:16:33.315353 | orchestrator | + 
/opt/configuration/scripts/set-ceph-version.sh reef 2026-04-13 00:16:33.319948 | orchestrator | + set -e 2026-04-13 00:16:33.320013 | orchestrator | + VERSION=reef 2026-04-13 00:16:33.320199 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-13 00:16:33.325764 | orchestrator | + [[ -n ceph_version: reef ]] 2026-04-13 00:16:33.325852 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-04-13 00:16:33.329592 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-04-13 00:16:33.335973 | orchestrator | + set -e 2026-04-13 00:16:33.336017 | orchestrator | + VERSION=2024.2 2026-04-13 00:16:33.336352 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-13 00:16:33.339982 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-04-13 00:16:33.340037 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-04-13 00:16:33.345441 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-04-13 00:16:33.346471 | orchestrator | ++ semver latest 7.0.0 2026-04-13 00:16:33.408662 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-13 00:16:33.408760 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-13 00:16:33.408775 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-04-13 00:16:33.409289 | orchestrator | ++ semver latest 10.0.0-0 2026-04-13 00:16:33.464058 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-13 00:16:33.464357 | orchestrator | ++ semver 2024.2 2025.1 2026-04-13 00:16:33.517591 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-13 00:16:33.517702 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-04-13 00:16:33.604380 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-13 00:16:33.607020 | orchestrator | + source /opt/venv/bin/activate 
2026-04-13 00:16:33.608256 | orchestrator | ++ deactivate nondestructive 2026-04-13 00:16:33.608283 | orchestrator | ++ '[' -n '' ']' 2026-04-13 00:16:33.608295 | orchestrator | ++ '[' -n '' ']' 2026-04-13 00:16:33.608307 | orchestrator | ++ hash -r 2026-04-13 00:16:33.608354 | orchestrator | ++ '[' -n '' ']' 2026-04-13 00:16:33.608367 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-13 00:16:33.608379 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-13 00:16:33.608393 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-04-13 00:16:33.608516 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-13 00:16:33.608532 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-13 00:16:33.608614 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-13 00:16:33.608630 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-13 00:16:33.608642 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-13 00:16:33.608654 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-13 00:16:33.608693 | orchestrator | ++ export PATH 2026-04-13 00:16:33.608707 | orchestrator | ++ '[' -n '' ']' 2026-04-13 00:16:33.608718 | orchestrator | ++ '[' -z '' ']' 2026-04-13 00:16:33.608733 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-13 00:16:33.608755 | orchestrator | ++ PS1='(venv) ' 2026-04-13 00:16:33.608767 | orchestrator | ++ export PS1 2026-04-13 00:16:33.608781 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-13 00:16:33.608793 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-13 00:16:33.608804 | orchestrator | ++ hash -r 2026-04-13 00:16:33.608838 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-04-13 00:16:35.008017 | orchestrator | 2026-04-13 00:16:35.009186 | 
orchestrator | PLAY [Copy custom facts] *******************************************************
2026-04-13 00:16:35.009230 | orchestrator |
2026-04-13 00:16:35.009243 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-13 00:16:35.576106 | orchestrator | ok: [testbed-manager]
2026-04-13 00:16:35.576254 | orchestrator |
2026-04-13 00:16:35.576270 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-13 00:16:36.569358 | orchestrator | changed: [testbed-manager]
2026-04-13 00:16:36.569467 | orchestrator |
2026-04-13 00:16:36.569483 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-04-13 00:16:36.569496 | orchestrator |
2026-04-13 00:16:36.569507 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-13 00:16:39.233222 | orchestrator | ok: [testbed-manager]
2026-04-13 00:16:39.233330 | orchestrator |
2026-04-13 00:16:39.233348 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-04-13 00:16:39.288779 | orchestrator | ok: [testbed-manager]
2026-04-13 00:16:39.288879 | orchestrator |
2026-04-13 00:16:39.288898 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-04-13 00:16:39.761216 | orchestrator | changed: [testbed-manager]
2026-04-13 00:16:39.761334 | orchestrator |
2026-04-13 00:16:39.761352 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-04-13 00:16:39.803368 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:16:39.803466 | orchestrator |
2026-04-13 00:16:39.803480 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-13 00:16:40.152173 | orchestrator | changed: [testbed-manager]
2026-04-13 00:16:40.152287 | orchestrator |
2026-04-13 00:16:40.152305 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-04-13 00:16:40.491837 | orchestrator | ok: [testbed-manager]
2026-04-13 00:16:40.491942 | orchestrator |
2026-04-13 00:16:40.491959 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-04-13 00:16:40.627097 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:16:40.627239 | orchestrator |
2026-04-13 00:16:40.627257 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-04-13 00:16:40.627270 | orchestrator |
2026-04-13 00:16:40.627281 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-13 00:16:42.357624 | orchestrator | ok: [testbed-manager]
2026-04-13 00:16:42.357754 | orchestrator |
2026-04-13 00:16:42.357783 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-04-13 00:16:42.488728 | orchestrator | included: osism.services.traefik for testbed-manager
2026-04-13 00:16:42.488831 | orchestrator |
2026-04-13 00:16:42.488853 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-04-13 00:16:42.563761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-04-13 00:16:42.563850 | orchestrator |
2026-04-13 00:16:42.563861 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-04-13 00:16:43.659280 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-04-13 00:16:43.659411 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-04-13 00:16:43.659439 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-04-13 00:16:43.659459 | orchestrator |
2026-04-13 00:16:43.659479 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-04-13 00:16:45.509732 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-04-13 00:16:45.509847 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-04-13 00:16:45.509862 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-04-13 00:16:45.509885 | orchestrator |
2026-04-13 00:16:45.509898 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-04-13 00:16:46.145747 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-13 00:16:46.145850 | orchestrator | changed: [testbed-manager]
2026-04-13 00:16:46.145867 | orchestrator |
2026-04-13 00:16:46.145880 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-04-13 00:16:46.808290 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-13 00:16:46.808393 | orchestrator | changed: [testbed-manager]
2026-04-13 00:16:46.808410 | orchestrator |
2026-04-13 00:16:46.808423 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-04-13 00:16:46.867712 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:16:46.867806 | orchestrator |
2026-04-13 00:16:46.867822 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-04-13 00:16:47.226565 | orchestrator | ok: [testbed-manager]
2026-04-13 00:16:47.226635 | orchestrator |
2026-04-13 00:16:47.226642 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-04-13 00:16:47.284208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-04-13 00:16:47.284307 | orchestrator |
2026-04-13 00:16:47.284321 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-04-13 00:16:48.393240 | orchestrator | changed: [testbed-manager]
2026-04-13 00:16:48.393345 | orchestrator |
2026-04-13 00:16:48.393362 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-04-13 00:16:49.269862 | orchestrator | changed: [testbed-manager]
2026-04-13 00:16:49.269966 | orchestrator |
2026-04-13 00:16:49.269997 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-04-13 00:17:03.414658 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:03.414757 | orchestrator |
2026-04-13 00:17:03.414789 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-04-13 00:17:03.473664 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:17:03.473757 | orchestrator |
2026-04-13 00:17:03.473770 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-04-13 00:17:03.473779 | orchestrator |
2026-04-13 00:17:03.473786 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-13 00:17:06.377410 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:06.377535 | orchestrator |
2026-04-13 00:17:06.377602 | orchestrator | TASK [Apply manager role] ******************************************************
2026-04-13 00:17:06.503775 | orchestrator | included: osism.services.manager for testbed-manager
2026-04-13 00:17:06.503884 | orchestrator |
2026-04-13 00:17:06.503899 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-13 00:17:06.557029 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-13 00:17:06.557168 | orchestrator |
2026-04-13 00:17:06.557192 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-13 00:17:09.223700 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:09.223804 | orchestrator |
2026-04-13 00:17:09.223821 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-13 00:17:09.279136 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:09.279238 | orchestrator |
2026-04-13 00:17:09.279257 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-13 00:17:09.444242 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-13 00:17:09.444311 | orchestrator |
2026-04-13 00:17:09.444318 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-13 00:17:12.422680 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-04-13 00:17:12.422760 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-04-13 00:17:12.422767 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-13 00:17:12.422772 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-04-13 00:17:12.422777 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-13 00:17:12.422782 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-13 00:17:12.422786 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-13 00:17:12.422791 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-04-13 00:17:12.422795 | orchestrator |
2026-04-13 00:17:12.422800 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-13 00:17:13.100754 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:13.100848 | orchestrator |
2026-04-13 00:17:13.100862 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-13 00:17:13.762869 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:13.762972 | orchestrator |
2026-04-13 00:17:13.762990 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-13 00:17:13.847509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-13 00:17:13.847611 | orchestrator |
2026-04-13 00:17:13.847627 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-13 00:17:15.128886 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-04-13 00:17:15.128965 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-04-13 00:17:15.128975 | orchestrator |
2026-04-13 00:17:15.128983 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-13 00:17:15.811784 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:15.811853 | orchestrator |
2026-04-13 00:17:15.811861 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-13 00:17:15.870658 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:17:15.870765 | orchestrator |
2026-04-13 00:17:15.870780 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-13 00:17:15.943931 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-13 00:17:15.944020 | orchestrator |
2026-04-13 00:17:15.944031 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-13 00:17:16.599994 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:16.600130 | orchestrator |
2026-04-13 00:17:16.600150 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-13 00:17:16.665545 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-13 00:17:16.665683 | orchestrator |
2026-04-13 00:17:16.665698 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-13 00:17:18.050243 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-13 00:17:18.050376 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-13 00:17:18.050402 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:18.050424 | orchestrator |
2026-04-13 00:17:18.050443 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-13 00:17:18.711581 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:18.711671 | orchestrator |
2026-04-13 00:17:18.711681 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-13 00:17:18.773167 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:17:18.773298 | orchestrator |
2026-04-13 00:17:18.773328 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-13 00:17:18.875498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-13 00:17:18.875579 | orchestrator |
2026-04-13 00:17:18.875590 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-13 00:17:19.440883 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:19.441003 | orchestrator |
2026-04-13 00:17:19.441042 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-13 00:17:19.872040 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:19.872189 | orchestrator |
2026-04-13 00:17:19.872210 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-13 00:17:21.136871 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-04-13 00:17:21.136966 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-04-13 00:17:21.136979 | orchestrator |
2026-04-13 00:17:21.136990 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-13 00:17:21.819023 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:21.819195 | orchestrator |
2026-04-13 00:17:21.819214 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-13 00:17:22.194846 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:22.194947 | orchestrator |
2026-04-13 00:17:22.194967 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-13 00:17:22.552638 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:22.552732 | orchestrator |
2026-04-13 00:17:22.552747 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-13 00:17:22.599370 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:17:22.599465 | orchestrator |
2026-04-13 00:17:22.599481 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-13 00:17:22.675249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-13 00:17:22.675344 | orchestrator |
2026-04-13 00:17:22.675360 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-13 00:17:22.732549 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:22.732640 | orchestrator |
2026-04-13 00:17:22.732654 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-13 00:17:24.768131 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-04-13 00:17:24.768245 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-04-13 00:17:24.768260 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-04-13 00:17:24.768270 | orchestrator |
2026-04-13 00:17:24.768280 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-13 00:17:25.503728 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:25.503810 | orchestrator |
2026-04-13 00:17:25.503821 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-13 00:17:26.199951 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:26.200051 | orchestrator |
2026-04-13 00:17:26.200068 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-13 00:17:26.949132 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:26.949231 | orchestrator |
2026-04-13 00:17:26.949250 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-13 00:17:27.031756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-13 00:17:27.031865 | orchestrator |
2026-04-13 00:17:27.031907 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-13 00:17:27.087514 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:27.087600 | orchestrator |
2026-04-13 00:17:27.087616 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-13 00:17:27.809683 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-04-13 00:17:27.809786 | orchestrator |
2026-04-13 00:17:27.809803 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-13 00:17:27.899542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-13 00:17:27.899640 | orchestrator |
2026-04-13 00:17:27.899656 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-13 00:17:28.679864 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:28.679948 | orchestrator |
2026-04-13 00:17:28.679958 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-13 00:17:29.327910 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:29.327991 | orchestrator |
2026-04-13 00:17:29.328005 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-13 00:17:29.392974 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:17:29.393051 | orchestrator |
2026-04-13 00:17:29.393062 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-13 00:17:29.448804 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:29.448905 | orchestrator |
2026-04-13 00:17:29.448927 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-13 00:17:30.366919 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:30.367043 | orchestrator |
2026-04-13 00:17:30.367151 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-13 00:18:48.274234 | orchestrator | changed: [testbed-manager]
2026-04-13 00:18:48.274329 | orchestrator |
2026-04-13 00:18:48.274341 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-13 00:18:49.272115 | orchestrator | ok: [testbed-manager]
2026-04-13 00:18:49.272215 | orchestrator |
2026-04-13 00:18:49.272230 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-13 00:18:49.321556 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:18:49.321668 | orchestrator |
2026-04-13 00:18:49.321686 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-13 00:18:51.741945 | orchestrator | changed: [testbed-manager]
2026-04-13 00:18:51.742154 | orchestrator |
2026-04-13 00:18:51.742172 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-04-13 00:18:51.864103 | orchestrator | ok: [testbed-manager]
2026-04-13 00:18:51.864216 | orchestrator |
2026-04-13 00:18:51.864268 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-13 00:18:51.864292 | orchestrator |
2026-04-13 00:18:51.864311 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-04-13 00:18:51.914692 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:18:51.914798 | orchestrator |
2026-04-13 00:18:51.914817 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-04-13 00:19:51.962866 | orchestrator | Pausing for 60 seconds
2026-04-13 00:19:51.963127 | orchestrator | changed: [testbed-manager]
2026-04-13 00:19:51.963158 | orchestrator |
2026-04-13 00:19:51.963172 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-04-13 00:19:55.576995 | orchestrator | changed: [testbed-manager]
2026-04-13 00:19:55.577106 | orchestrator |
2026-04-13 00:19:55.577122 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-04-13 00:20:57.808550 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-04-13 00:20:57.808655 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-04-13 00:20:57.808670 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-04-13 00:20:57.808704 | orchestrator | changed: [testbed-manager]
2026-04-13 00:20:57.808717 | orchestrator |
2026-04-13 00:20:57.808728 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-04-13 00:21:03.794674 | orchestrator | changed: [testbed-manager]
2026-04-13 00:21:03.794783 | orchestrator |
2026-04-13 00:21:03.794800 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-04-13 00:21:03.875536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-04-13 00:21:03.875640 | orchestrator |
2026-04-13 00:21:03.875657 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-13 00:21:03.875670 | orchestrator |
2026-04-13 00:21:03.875682 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-04-13 00:21:03.932437 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:21:03.932551 | orchestrator |
2026-04-13 00:21:03.932577 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-04-13 00:21:04.001649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-04-13 00:21:04.001766 | orchestrator |
2026-04-13 00:21:04.001789 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-04-13 00:21:04.780977 | orchestrator | changed: [testbed-manager]
2026-04-13 00:21:04.781080 | orchestrator |
2026-04-13 00:21:04.781098 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-04-13 00:21:08.175483 |
orchestrator | ok: [testbed-manager]
2026-04-13 00:21:08.175595 | orchestrator |
2026-04-13 00:21:08.175608 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-13 00:21:08.236126 | orchestrator | ok: [testbed-manager] => {
2026-04-13 00:21:08.236225 | orchestrator | "version_check_result.stdout_lines": [
2026-04-13 00:21:08.236241 | orchestrator | "=== OSISM Container Version Check ===",
2026-04-13 00:21:08.236254 | orchestrator | "Checking running containers against expected versions...",
2026-04-13 00:21:08.236266 | orchestrator | "",
2026-04-13 00:21:08.236278 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-13 00:21:08.236290 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2026-04-13 00:21:08.236301 | orchestrator | " Enabled: true",
2026-04-13 00:21:08.236312 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest",
2026-04-13 00:21:08.236323 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:08.236334 | orchestrator | "",
2026-04-13 00:21:08.236345 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-13 00:21:08.236356 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest",
2026-04-13 00:21:08.236367 | orchestrator | " Enabled: true",
2026-04-13 00:21:08.236378 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest",
2026-04-13 00:21:08.236389 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:08.236399 | orchestrator | "",
2026-04-13 00:21:08.236410 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-13 00:21:08.236421 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2026-04-13 00:21:08.236432 | orchestrator | " Enabled: true",
2026-04-13 00:21:08.236443 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2026-04-13 00:21:08.236453 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:08.236464 | orchestrator | "",
2026-04-13 00:21:08.236475 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-13 00:21:08.236487 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2026-04-13 00:21:08.236498 | orchestrator | " Enabled: true",
2026-04-13 00:21:08.236509 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2026-04-13 00:21:08.236519 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:08.236530 | orchestrator | "",
2026-04-13 00:21:08.236541 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-13 00:21:08.236576 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-04-13 00:21:08.236587 | orchestrator | " Enabled: true",
2026-04-13 00:21:08.236598 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-04-13 00:21:08.236609 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:08.236619 | orchestrator | "",
2026-04-13 00:21:08.236630 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-04-13 00:21:08.236641 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-13 00:21:08.236651 | orchestrator | " Enabled: true",
2026-04-13 00:21:08.236665 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-13 00:21:08.236678 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:08.236690 | orchestrator | "",
2026-04-13 00:21:08.236702 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-04-13 00:21:08.236714 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-13 00:21:08.236727 | orchestrator | " Enabled: true",
2026-04-13 00:21:08.236739 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-13 00:21:08.236752 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:08.236764 | orchestrator | "",
2026-04-13 00:21:08.236778 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-04-13 00:21:08.236790 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-13 00:21:08.236828 | orchestrator | " Enabled: true",
2026-04-13 00:21:08.236841 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-13 00:21:08.236863 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:08.236874 | orchestrator | "",
2026-04-13 00:21:08.236885 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-04-13 00:21:08.236902 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-04-13 00:21:08.236913 | orchestrator | " Enabled: true",
2026-04-13 00:21:08.236925 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2026-04-13 00:21:08.236936 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:08.236947 | orchestrator | "",
2026-04-13 00:21:08.236957 | orchestrator | "Checking service: redis (Redis Cache)",
2026-04-13 00:21:08.236968 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-13 00:21:08.236979 | orchestrator | " Enabled: true",
2026-04-13 00:21:08.236990 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-13 00:21:08.237000 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:08.237011 | orchestrator | "",
2026-04-13 00:21:08.237022 | orchestrator | "Checking service: api (OSISM API Service)",
2026-04-13 00:21:08.237032 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-13 00:21:08.237043 | orchestrator | " Enabled: true",
2026-04-13 00:21:08.237054 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-13 00:21:08.237065 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:08.237075 | orchestrator | "",
2026-04-13 00:21:08.237086 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-04-13 00:21:08.237097 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-13 00:21:08.237108 | orchestrator | " Enabled: true",
2026-04-13 00:21:08.237119 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-13 00:21:08.237129 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:08.237140 | orchestrator | "",
2026-04-13 00:21:08.237150 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-04-13 00:21:08.237161 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-13 00:21:08.237172 | orchestrator | " Enabled: true",
2026-04-13 00:21:08.237182 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-13 00:21:08.237193 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:08.237203 | orchestrator | "",
2026-04-13 00:21:08.237214 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-04-13 00:21:08.237225 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-13 00:21:08.237235 | orchestrator | " Enabled: true",
2026-04-13 00:21:08.237254 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-13 00:21:08.237265 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:08.237276 | orchestrator | "",
2026-04-13 00:21:08.237286 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-04-13 00:21:08.237315 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-13 00:21:08.237326 | orchestrator | " Enabled: true",
2026-04-13 00:21:08.237337 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-13 00:21:08.237348 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:08.237359 | orchestrator | "",
2026-04-13 00:21:08.237369 | orchestrator | "=== Summary ===",
2026-04-13 00:21:08.237380 | orchestrator | "Errors (version mismatches): 0",
2026-04-13 00:21:08.237391 | orchestrator | "Warnings (expected containers not running): 0",
2026-04-13 00:21:08.237402 | orchestrator | "",
2026-04-13 00:21:08.237413 | orchestrator | "✅ All running containers match expected versions!"
2026-04-13 00:21:08.237424 | orchestrator | ]
2026-04-13 00:21:08.237435 | orchestrator | }
2026-04-13 00:21:08.237446 | orchestrator |
2026-04-13 00:21:08.237457 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-04-13 00:21:08.303111 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:21:08.303161 | orchestrator |
2026-04-13 00:21:08.303175 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:21:08.303189 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-04-13 00:21:08.303201 | orchestrator |
2026-04-13 00:21:08.413629 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-13 00:21:08.413747 | orchestrator | + deactivate
2026-04-13 00:21:08.413774 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-13 00:21:08.413878 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-13 00:21:08.413905 | orchestrator | + export PATH
2026-04-13 00:21:08.413925 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-13 00:21:08.413946 | orchestrator | + '[' -n '' ']'
2026-04-13 00:21:08.413966 | orchestrator | + hash -r
2026-04-13 00:21:08.413984 | orchestrator | + '[' -n '' ']'
2026-04-13 00:21:08.414000 | orchestrator | + unset VIRTUAL_ENV
2026-04-13 00:21:08.414088 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-13 00:21:08.414113 | orchestrator | + '[' '!'
'' = nondestructive ']' 2026-04-13 00:21:08.414134 | orchestrator | + unset -f deactivate 2026-04-13 00:21:08.414155 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-13 00:21:08.421541 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-13 00:21:08.421624 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-13 00:21:08.421637 | orchestrator | + local max_attempts=60 2026-04-13 00:21:08.421649 | orchestrator | + local name=ceph-ansible 2026-04-13 00:21:08.421660 | orchestrator | + local attempt_num=1 2026-04-13 00:21:08.422334 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:21:08.460404 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:21:08.460504 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-13 00:21:08.460520 | orchestrator | + local max_attempts=60 2026-04-13 00:21:08.460533 | orchestrator | + local name=kolla-ansible 2026-04-13 00:21:08.460545 | orchestrator | + local attempt_num=1 2026-04-13 00:21:08.461121 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-13 00:21:08.504267 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:21:08.504352 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-13 00:21:08.504391 | orchestrator | + local max_attempts=60 2026-04-13 00:21:08.504404 | orchestrator | + local name=osism-ansible 2026-04-13 00:21:08.504415 | orchestrator | + local attempt_num=1 2026-04-13 00:21:08.505134 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-13 00:21:08.544017 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:21:08.544126 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-13 00:21:08.544151 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-13 00:21:09.297272 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-13 00:21:09.489283 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-13 00:21:09.489386 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-04-13 00:21:09.489395 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-04-13 00:21:09.489402 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-04-13 00:21:09.489411 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2026-04-13 00:21:09.489418 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-04-13 00:21:09.489424 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-04-13 00:21:09.489430 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-04-13 00:21:09.489450 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-04-13 00:21:09.489457 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-04-13 00:21:09.489463 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-04-13 00:21:09.489469 | orchestrator | 
manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-04-13 00:21:09.489475 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-04-13 00:21:09.489481 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-04-13 00:21:09.489488 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-04-13 00:21:09.489494 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-04-13 00:21:09.496405 | orchestrator | ++ semver latest 7.0.0 2026-04-13 00:21:09.540089 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-13 00:21:09.540168 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-13 00:21:09.540184 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-13 00:21:09.544935 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-13 00:21:22.105408 | orchestrator | 2026-04-13 00:21:22 | INFO  | Prepare task for execution of resolvconf. 2026-04-13 00:21:22.316718 | orchestrator | 2026-04-13 00:21:22 | INFO  | Task 30509c7b-78ca-4347-95a5-12d6ac160bcf (resolvconf) was prepared for execution. 2026-04-13 00:21:22.316887 | orchestrator | 2026-04-13 00:21:22 | INFO  | It takes a moment until task 30509c7b-78ca-4347-95a5-12d6ac160bcf (resolvconf) has been started and output is visible here. 
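The xtrace above shows `wait_for_container_healthy 60 <name>` succeeding on its first `docker inspect` for each container, so the retry path never appears in the log. A minimal sketch of such a helper, reconstructed from the trace — the sleep interval and failure message are assumptions, not taken from the testbed scripts:

```shell
# Poll a container's health status until it reports "healthy" or the
# attempt budget is exhausted. Mirrors the variables visible in the
# xtrace (max_attempts, name, attempt_num); retry details are assumed.
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the log each container (ceph-ansible, kolla-ansible, osism-ansible) is already healthy, so the `until` condition holds immediately and the function returns without looping.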
2026-04-13 00:21:36.411651 | orchestrator | 2026-04-13 00:21:36.411800 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-13 00:21:36.411818 | orchestrator | 2026-04-13 00:21:36.411829 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-13 00:21:36.411840 | orchestrator | Monday 13 April 2026 00:21:25 +0000 (0:00:00.182) 0:00:00.182 ********** 2026-04-13 00:21:36.411850 | orchestrator | ok: [testbed-manager] 2026-04-13 00:21:36.411860 | orchestrator | 2026-04-13 00:21:36.411871 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-13 00:21:36.411881 | orchestrator | Monday 13 April 2026 00:21:29 +0000 (0:00:04.046) 0:00:04.229 ********** 2026-04-13 00:21:36.411891 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:21:36.411903 | orchestrator | 2026-04-13 00:21:36.411920 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-13 00:21:36.411936 | orchestrator | Monday 13 April 2026 00:21:29 +0000 (0:00:00.070) 0:00:04.300 ********** 2026-04-13 00:21:36.411960 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-13 00:21:36.411979 | orchestrator | 2026-04-13 00:21:36.411994 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-13 00:21:36.412011 | orchestrator | Monday 13 April 2026 00:21:29 +0000 (0:00:00.077) 0:00:04.378 ********** 2026-04-13 00:21:36.412027 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-13 00:21:36.412042 | orchestrator | 2026-04-13 00:21:36.412072 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-13 00:21:36.412088 | orchestrator | Monday 13 April 2026 00:21:29 +0000 (0:00:00.072) 0:00:04.451 ********** 2026-04-13 00:21:36.412102 | orchestrator | ok: [testbed-manager] 2026-04-13 00:21:36.412117 | orchestrator | 2026-04-13 00:21:36.412134 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-13 00:21:36.412152 | orchestrator | Monday 13 April 2026 00:21:31 +0000 (0:00:01.293) 0:00:05.744 ********** 2026-04-13 00:21:36.412168 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:21:36.412184 | orchestrator | 2026-04-13 00:21:36.412199 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-13 00:21:36.412217 | orchestrator | Monday 13 April 2026 00:21:31 +0000 (0:00:00.068) 0:00:05.813 ********** 2026-04-13 00:21:36.412234 | orchestrator | ok: [testbed-manager] 2026-04-13 00:21:36.412252 | orchestrator | 2026-04-13 00:21:36.412269 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-13 00:21:36.412286 | orchestrator | Monday 13 April 2026 00:21:31 +0000 (0:00:00.597) 0:00:06.410 ********** 2026-04-13 00:21:36.412303 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:21:36.412319 | orchestrator | 2026-04-13 00:21:36.412336 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-13 00:21:36.412356 | orchestrator | Monday 13 April 2026 00:21:31 +0000 (0:00:00.087) 0:00:06.497 ********** 2026-04-13 00:21:36.412374 | orchestrator | changed: [testbed-manager] 2026-04-13 00:21:36.412391 | orchestrator | 2026-04-13 00:21:36.412409 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-13 00:21:36.412426 | orchestrator | Monday 13 April 2026 00:21:32 +0000 (0:00:00.627) 0:00:07.125 ********** 2026-04-13 00:21:36.412443 | orchestrator | changed: 
[testbed-manager] 2026-04-13 00:21:36.412460 | orchestrator | 2026-04-13 00:21:36.412478 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-13 00:21:36.412494 | orchestrator | Monday 13 April 2026 00:21:33 +0000 (0:00:01.230) 0:00:08.356 ********** 2026-04-13 00:21:36.412511 | orchestrator | ok: [testbed-manager] 2026-04-13 00:21:36.412557 | orchestrator | 2026-04-13 00:21:36.412575 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-13 00:21:36.412593 | orchestrator | Monday 13 April 2026 00:21:34 +0000 (0:00:01.056) 0:00:09.412 ********** 2026-04-13 00:21:36.412610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-13 00:21:36.412627 | orchestrator | 2026-04-13 00:21:36.412643 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-13 00:21:36.412660 | orchestrator | Monday 13 April 2026 00:21:34 +0000 (0:00:00.089) 0:00:09.501 ********** 2026-04-13 00:21:36.412676 | orchestrator | changed: [testbed-manager] 2026-04-13 00:21:36.412692 | orchestrator | 2026-04-13 00:21:36.412709 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:21:36.412727 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-13 00:21:36.412744 | orchestrator | 2026-04-13 00:21:36.412794 | orchestrator | 2026-04-13 00:21:36.412811 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:21:36.412828 | orchestrator | Monday 13 April 2026 00:21:36 +0000 (0:00:01.223) 0:00:10.725 ********** 2026-04-13 00:21:36.412844 | orchestrator | =============================================================================== 2026-04-13 00:21:36.412861 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.05s 2026-04-13 00:21:36.412877 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.29s 2026-04-13 00:21:36.412893 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.23s 2026-04-13 00:21:36.412909 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.22s 2026-04-13 00:21:36.412925 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.06s 2026-04-13 00:21:36.412942 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.63s 2026-04-13 00:21:36.412981 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.60s 2026-04-13 00:21:36.412998 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-04-13 00:21:36.413015 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-04-13 00:21:36.413031 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-04-13 00:21:36.413047 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-04-13 00:21:36.413062 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-04-13 00:21:36.413078 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-04-13 00:21:36.628903 | orchestrator | + osism apply sshconfig 2026-04-13 00:21:48.070441 | orchestrator | 2026-04-13 00:21:48 | INFO  | Prepare task for execution of sshconfig. 2026-04-13 00:21:48.148120 | orchestrator | 2026-04-13 00:21:48 | INFO  | Task faa3afa2-2bd8-4a5a-af46-a21b16d67747 (sshconfig) was prepared for execution. 
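The `osism apply sshconfig` run that follows writes one config fragment per inventory host into `~/.ssh/config.d` and then assembles them into a single ssh config (the "Ensure config for each host exist" and "Assemble ssh config" tasks). A rough sketch of that assemble step under stated assumptions — the fragment contents are invented here (the operator user `dragon` is inferred from the `/home/dragon` path seen earlier in the log), and the real role uses Ansible templates rather than `printf`:

```shell
# Write one ssh config fragment per host into <dir>/config.d, then
# concatenate all fragments into <dir>/config, as the sshconfig play does.
# Fragment format is a placeholder, not the role's actual template.
assemble_ssh_config() {
    sshdir="$1"; shift
    mkdir -p "$sshdir/config.d"
    for host in "$@"; do
        printf 'Host %s\n    User dragon\n' "$host" > "$sshdir/config.d/$host"
    done
    cat "$sshdir/config.d/"* > "$sshdir/config"
}
```

Usage against a throwaway directory: `assemble_ssh_config "$(mktemp -d)" testbed-manager testbed-node-0`. Keeping per-host fragments in `config.d` lets the play rewrite a single host's entry without touching the others, which matches the per-item `changed:` lines in the task output.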
2026-04-13 00:21:48.148214 | orchestrator | 2026-04-13 00:21:48 | INFO  | It takes a moment until task faa3afa2-2bd8-4a5a-af46-a21b16d67747 (sshconfig) has been started and output is visible here. 2026-04-13 00:22:00.019404 | orchestrator | 2026-04-13 00:22:00.019556 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-13 00:22:00.019575 | orchestrator | 2026-04-13 00:22:00.019587 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-13 00:22:00.019599 | orchestrator | Monday 13 April 2026 00:21:51 +0000 (0:00:00.216) 0:00:00.216 ********** 2026-04-13 00:22:00.019611 | orchestrator | ok: [testbed-manager] 2026-04-13 00:22:00.019623 | orchestrator | 2026-04-13 00:22:00.019634 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-13 00:22:00.019673 | orchestrator | Monday 13 April 2026 00:21:52 +0000 (0:00:00.964) 0:00:01.181 ********** 2026-04-13 00:22:00.019685 | orchestrator | changed: [testbed-manager] 2026-04-13 00:22:00.019697 | orchestrator | 2026-04-13 00:22:00.019708 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-13 00:22:00.019758 | orchestrator | Monday 13 April 2026 00:21:53 +0000 (0:00:00.581) 0:00:01.763 ********** 2026-04-13 00:22:00.019771 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-13 00:22:00.019782 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-13 00:22:00.019794 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-13 00:22:00.019804 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-13 00:22:00.019815 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-13 00:22:00.019826 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-13 00:22:00.019837 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-13 00:22:00.019848 | orchestrator | 2026-04-13 00:22:00.019859 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-13 00:22:00.019870 | orchestrator | Monday 13 April 2026 00:21:59 +0000 (0:00:06.064) 0:00:07.827 ********** 2026-04-13 00:22:00.019881 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:22:00.019892 | orchestrator | 2026-04-13 00:22:00.019903 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-13 00:22:00.019914 | orchestrator | Monday 13 April 2026 00:21:59 +0000 (0:00:00.113) 0:00:07.941 ********** 2026-04-13 00:22:00.019925 | orchestrator | changed: [testbed-manager] 2026-04-13 00:22:00.019936 | orchestrator | 2026-04-13 00:22:00.019950 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:22:00.019965 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:22:00.019978 | orchestrator | 2026-04-13 00:22:00.019991 | orchestrator | 2026-04-13 00:22:00.020005 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:22:00.020018 | orchestrator | Monday 13 April 2026 00:21:59 +0000 (0:00:00.591) 0:00:08.532 ********** 2026-04-13 00:22:00.020031 | orchestrator | =============================================================================== 2026-04-13 00:22:00.020045 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.06s 2026-04-13 00:22:00.020058 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.96s 2026-04-13 00:22:00.020071 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s 2026-04-13 00:22:00.020084 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.58s 2026-04-13 00:22:00.020098 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.11s 2026-04-13 00:22:00.202461 | orchestrator | + osism apply known-hosts 2026-04-13 00:22:11.583228 | orchestrator | 2026-04-13 00:22:11 | INFO  | Prepare task for execution of known-hosts. 2026-04-13 00:22:11.665423 | orchestrator | 2026-04-13 00:22:11 | INFO  | Task 3d7f6d23-348c-469c-ab87-3dde8b2dff08 (known-hosts) was prepared for execution. 2026-04-13 00:22:11.665517 | orchestrator | 2026-04-13 00:22:11 | INFO  | It takes a moment until task 3d7f6d23-348c-469c-ab87-3dde8b2dff08 (known-hosts) has been started and output is visible here. 2026-04-13 00:22:27.824254 | orchestrator | 2026-04-13 00:22:27.824384 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-13 00:22:27.824402 | orchestrator | 2026-04-13 00:22:27.824414 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-13 00:22:27.824429 | orchestrator | Monday 13 April 2026 00:22:14 +0000 (0:00:00.202) 0:00:00.202 ********** 2026-04-13 00:22:27.824448 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-13 00:22:27.824466 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-13 00:22:27.824511 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-13 00:22:27.824530 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-13 00:22:27.824550 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-13 00:22:27.824569 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-13 00:22:27.824587 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-13 00:22:27.824605 | orchestrator | 2026-04-13 00:22:27.824623 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-13 
00:22:27.824644 | orchestrator | Monday 13 April 2026 00:22:21 +0000 (0:00:06.545) 0:00:06.748 ********** 2026-04-13 00:22:27.824677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-13 00:22:27.824810 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-13 00:22:27.824836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-13 00:22:27.824856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-13 00:22:27.824875 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-13 00:22:27.824892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-13 00:22:27.824910 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-13 00:22:27.824927 | orchestrator | 2026-04-13 00:22:27.824945 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-13 00:22:27.824961 | orchestrator | Monday 13 April 2026 00:22:21 +0000 (0:00:00.184) 0:00:06.932 ********** 2026-04-13 00:22:27.824977 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINm66KpTzx+z8DLSpN1i757N8ydA5cyM7ARuwqUBC5nC) 2026-04-13 00:22:27.825000 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfAZQeUn9B7UY+Pg+Sl2EVt60vxDl4f+TyGQOm1OXkiAq0EtUSPoOZ5aK/VLUGTSKeXRWCljwJnmNYBQqSvYhso51hj8rJt+Uf599pNW/iIRRxqhpUJNu4Fy6hvw5P90BvhRfEJrjsHRcdJDnL9f7FyGHmvO0mMcg8+HAOqlLXkUZpVY/rYuA0RXKAae+XsiLvvxvNTefM+74knUOf5OzRZELs1pjqfREL1QEday++aJYy8XF/mD6aUTsECUWLPVjYqG+gSSn/c4/ASPcivaWar5sBS6DAVUCeXUaugynt2VXwWQaiHja23A5U0EgVuBOuyjHLb+a3ILaZnDwvueQMmqYflVNg121U7xhaAWNySYm+HFv5sdrLOZb2imD0scjswaRhNogTZUMHkO57uylV6wHdixWzd/6VoHMdZPM+lgUwa3DcOHVvlTp1VcYuGijZiwBfqoR/RUQ/6NLG7npMYV/vyjXqtOtgyvQh4NuhRuwgufsuxJN3MFbuj5OOmEM=) 2026-04-13 00:22:27.825030 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOO81NlwiIFOw+dsgQRxvjWb4rSbCmh31sVvNsYkEUEJNT9Nk+/ev7TGIImzcPOnU/qKYwxe4D76YZ8lGyWMwiI=) 2026-04-13 00:22:27.825049 | orchestrator | 2026-04-13 00:22:27.825065 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-13 00:22:27.825085 | orchestrator | Monday 13 April 2026 00:22:22 +0000 (0:00:01.316) 0:00:08.249 ********** 2026-04-13 00:22:27.825130 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCY1McA36y5b7NftVEOIZqq3bJEIDRHBmBJQeglvx7WgT2Ly+8IWyxyzA8prlpFxiw2aLKKF9K70Wmhefs1egA3PqcCqzY2M2twwKoKQf4FmAs27mrLPLqUro9ywzWvruoHxM/NeGkKRllq+lVkvlgxby1O1XoY+X7GqcPM4xZ8m4Nq3G/xPfQZDYuTXZ6/iVJRjLMwSGNuJXITCq/5Gp4zS4z+3SekkhuRu3Qp8OnoelC4gzTjbZNjtVa6Ld61MiDhuBub5JdKI6OoRy5yjLL2BloHBZGK5CKKLUp9q2sxwmrbX9jCvRLZVcHVj4kys8hGER31is2EewR/MdiwLlEcgWe/1sJRdGy97ETDII/+I51SFdpAbZaeg6HELXGOk+xFSxDturG17aodHqghKCAQJRHVYyrF8uPX1Ve/GuyQb+LfFnw+SVlXQTlD/WoO+uX0tJvp+rZkWw9GQX6pRBDMQxyqgY+yqJY3w3FqJSigWc6S9+wKr6VhkEM9rYFQ7Y0=) 
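The known_hosts play seen here scans each host with ssh-keyscan and then writes the collected `host keytype key` entries, first by hostname and later by `ansible_host` IP. A minimal sketch of that scan-and-write loop, assuming a plain known_hosts target file — the `-T 5` timeout and output path are assumptions, and the real role splits scanning and writing into separate tasks:

```shell
# Scan each given host's SSH host keys and collect them into one
# known_hosts-style file, as the known_hosts role does per hostname
# and per ansible_host address. Flags and file handling are assumed.
scan_known_hosts() {
    outfile="$1"; shift
    : > "$outfile"
    for host in "$@"; do
        # The log shows plain hostnames in the entries, so no -H hashing here.
        ssh-keyscan -T 5 "$host" >> "$outfile" 2>/dev/null
    done
}
```

Each scanned host contributes one line per key type (ssh-rsa, ecdsa-sha2-nistp256, ssh-ed25519), which is why every "Write scanned known_hosts entries" task above reports three changed items per node.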
2026-04-13 00:22:27.825167 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPnoJtt+JInBvj6woCf9zIvtA3skSXtUvQL/C+NQx550Y5iRLqQt7dYFMtVi0IUfYfwKxEE9vYWyMpY7igXLzmA=) 2026-04-13 00:22:27.825186 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ/9YgxGrG/oyjMMUNFpz0dKI9JtkfV7YqnDdc+PowMX) 2026-04-13 00:22:27.825206 | orchestrator | 2026-04-13 00:22:27.825223 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-13 00:22:27.825241 | orchestrator | Monday 13 April 2026 00:22:24 +0000 (0:00:01.132) 0:00:09.382 ********** 2026-04-13 00:22:27.825253 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDffuvhjdPLPthuz9wX32cAdQ66r4CkGSPwYez3AMfyFij2/T126Y2pEWFhj2D5woCgQatoXcfP+07ROhazaudqmxHC5eIlolWvJP1oU4SAgC4xMOXtLz859MuncEY8itE2+uyMCaf2JwoD2vYvnFhQ/GI2B2a8p/5qoONc6qBjbvFC6NS6AlRGACZ6XIj017uiFy7IefKIn1f1DVvXLY5zmx0qeg0udBPM4GMKAdK8A6/rPktqWTNc6Y8cLfIcwV2QG/SaTUgz04M5FJOsYEGIBJVdc1y3cjheVKCBYfi7RBwtRxGdxungK4Bk2O4q3l/QAxuyJIotY8HLddG2jHTMXwZiuC5UJKoMj1BgV4Jf1qrWSODo24ioMu+HtmhVhONaZOv75d0EkOxaJ7j4qpfF6gbDmyUxYq0o9eVfDjIu1i/u3hKmuIn3jkGMinnTUX+K7wfEvmDnRMzWXHh98tezuIkJwvMCYsQxs39DwX2uzL1WE1JlisArhyYSR/eoxxs=) 2026-04-13 00:22:27.825265 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPtsrJXFWIimfSojOGYsJASHwJF7tPi5HBQf9tnltKJW2UDBuQCxcFG+bI8pzyUXFwFzrCRX6e+1B0zNR+SoK/c=) 2026-04-13 00:22:27.825343 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGLyCwYO/yEIp3v6HkkGYEIhMX/JrMmj+94Jq2FTV4mp) 2026-04-13 00:22:27.825354 | orchestrator | 2026-04-13 00:22:27.825365 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-13 00:22:27.825384 
| orchestrator | Monday 13 April 2026 00:22:25 +0000 (0:00:01.115) 0:00:10.497 ********** 2026-04-13 00:22:27.825401 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCG/A2roQvD8NFGBe7IXZHuxRCSrfHPp+a/24hMX1uvlCoq/rkuGx4RLVt+feWh01Gn2n/3yWoMwdf5rA9fF7JgxcA9EOgaDVcGrNDIl3KQYtqHgcvhSqugaU2IhBqzoi9I85Pm/K2+282rMI0UO0JA4I4sXXevqeFGgFJ7D4bPg/kR83nzjkfdWOmo/ma5+wz1kcvLS+SE14WpbdFWMmj5Z6030L2/5s4i0fNwdmddbHcmL0KllcA7zRNN7w1yG65fwsXD7/nyLsPqX7lbJXpkn1nQtzYLrQ2bNdeKVtDcDcvtrpmVADB122S+bzlpggy2N0ugiYcGcrCC/9hlMlmr047QiihGh8P1vKta1+dSaZIfYVb/o3s8VA1TqqdZihjkzd2qkapnVKsPXPwsMDmP0gOIXFyNn6r5UZLtoACeaIj1R6VLxF5F5Df8G7dXPl926iq5KGlR64L5XGTqNW40hgd5XEZm06B1o4XVtnHW0U8herxgGVyEo2zT+aHhNb0=) 2026-04-13 00:22:27.825430 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFyXCE9zD8+k24WXzjEpqw7fxV6ZIXOgI34wRJjremkCR3M5/J4Gka+Q+NeQmQknZqOLv5dvf8iPrW4EXSEd8OQ=) 2026-04-13 00:22:27.825449 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPv9PiiKfNa15i4hmZ29vUrgefOHPAbe1iKAemuiAzr1) 2026-04-13 00:22:27.825465 | orchestrator | 2026-04-13 00:22:27.825481 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-13 00:22:27.825498 | orchestrator | Monday 13 April 2026 00:22:26 +0000 (0:00:01.138) 0:00:11.636 ********** 2026-04-13 00:22:27.825515 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPX120zNQqdXlfIqfILBEWgMBqyzBtiONbyjjkdZD+yh5EQ1p6rCGRqSw5p1uMwBrG4N+qJWMejcP5MEmC/kJM8=) 2026-04-13 00:22:27.825532 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOuK4hbYCJTl1AHxJQDo6uP+N1B+3XiA6cbl9tefZPET) 2026-04-13 00:22:27.825566 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCs+hDd8UUtCwZ84nimMjcl1VOzkFFqNDjfLJn/fMtOnPPFLaB5ZlHPqZ1qaZNUNckv4nisavz89Httj6Wplu68D5etR3dzQG20bAMbil1LtIqeJSNMQQb608c7nqbV2ZGFklrIkPwyMYC+UesE0FimOZeKUfVWmrMwz0/DW4lXPlDhW3ZsWXkLmWDoKwsuY8G5soixEGt3LKT8JFxJ+Y9XDnq+tEMqQO+qf0hDSDS1VsWhV1MXuy5Fd0PktkW/Es2hoyfytx5v7Fj/x5wwTY8a72ReDGP3U8jiy1KktcjKGfE4ysMlFKYf0rIdhNQ40jdApi/DxVdlG4JCnFW+8Li5g/AbZnDpmHCxga8JGv46/tkXHQ7PiZldI/H99pieQEkSr45lgMHB8SRUFJWlVAlaDPNN7hu9o/Sx3h3uzi+UdSaDvkL0nAAkH0rEIwEdFRbsz/Pm6vxRYYJ8vjYwcGtBVRzaPhzp9k3EwIDSJ9xlXwS58nFQfLUNVUWANBWRICc=) 2026-04-13 00:22:27.825585 | orchestrator | 2026-04-13 00:22:27.825603 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-13 00:22:27.825618 | orchestrator | Monday 13 April 2026 00:22:27 +0000 (0:00:01.103) 0:00:12.739 ********** 2026-04-13 00:22:27.825640 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM9RqbbuBKh+24HTlgNOzNN+FyRD6AX61mo5TQDT+zxpsWb6KX2sNJf0NeZQEQlwUbt8G43eYts9Go23NIcb8+g=) 2026-04-13 00:22:39.048120 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNExD7/2j4ZW1+B9hz7XY4Kgwnw2H8/Umdzj8bVsEJ4BkoAydLnIBb+xAai45sVu8sDjnHz4SLL/SA4N9/Pilijpsk+gho7eyV0UECR3J6d2JBnuV7W3WOBjygV/79Q+LqcP7FIJGFHKB6Cu0LjjxSoOuVNr/NFLQFOIjVBJevaHwiPiRqequrxaJBlLN96wwxCGnVbmOfwZvs0y3hWgFtwekyWKDhPDEXCUcT8HYzV4nfgBwONJuy/XbwMc6qi9JkSydu/J7txQYsfhAL4qhYNHepGqVJ/2Fs5O4F8wK2B6H1OHct1Q8JtZ8VbHG5Z0r+5/K74jD4iI8T14V+grLvaBIv7OGkuufkOsZAiIGg6cr1Pa207ZRfDFgdZK4t/+ANGkHiIn+7clQ3UlWsTO0BB6yAfjCozZq9WNoPUSvNegAEmVDm2qzvH6qQyaF8VMI73dz1wNKI/yjthtKyt9XA5K92t9LJfmB6VMl/KAxIf5tI29GzZzaA/iqP1XF5cT0=) 2026-04-13 00:22:39.048272 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFwHu4i98AAwTo2y1XAiQVB1p+O3xBX57OGHhz9O5/rr) 2026-04-13 00:22:39.048303 | orchestrator | 2026-04-13 00:22:39.048324 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-13 00:22:39.048343 | orchestrator | Monday 13 April 2026 00:22:28 +0000 (0:00:01.096) 0:00:13.836 **********
2026-04-13 00:22:39.048363 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCK4Hd3AlAc20DMfZDvufflXXwtcv2mG3hmMRf5PK5dBXl9qPjdA+6ivf+HZiVj7nC9b8bUcg2S8Fyaz7YP9PFitvPdcPK4R+ALjMFE7O2iC3exqtx11bqf9hToVtpAwKpAQr2Fps5xW05tydrehKRrakShAxyrMoJvBdEsm6061fqsaEd9BN93QqwcWIRDX/S1De8GDQv3Og9VGqZqAA8QK4Sb0dgSN8YRYowGqtqrrbr6DsBs83PlZKmnaIJgER+SMqulz4BePGLfYmU/2ooSXLN0XRCJAnwhzwH/VyecTctMwJqmRIp+WwJV5SZz3cMlQhXu/+pJ8a57g0JnyGkGm+B5b+kQW62MyG4YACoMIPp6RJrxM8SHx1s+cfay1QvPLohR3WqA/4/jG9yEfeIi1h384tbjlK4JvG2IBSHadfUjEJtrPoZ2zsT9EQlvKjdIyMA7NgzHPFUwcN3IkS0V5C/yE5YC1y9Iec1iA6Fb19mrMUDu9RJxjmlnYrq9Evk=)
2026-04-13 00:22:39.048384 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLXwZuXHJSZ6iDf7r4MvzhpeEdqCIMEHPoVfN9FN5RioyNhGRXI9NHcC6gJWpK7TvJVst1Ec3FytG5MK0GtmMJ4=)
2026-04-13 00:22:39.048405 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPgVlOZ80dVGekLT/wYpBKa0qHgGVWqu+jGlhcy4BHPF)
2026-04-13 00:22:39.048422 | orchestrator |
2026-04-13 00:22:39.048440 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-04-13 00:22:39.048458 | orchestrator | Monday 13 April 2026 00:22:29 +0000 (0:00:01.138) 0:00:14.974 **********
2026-04-13 00:22:39.048475 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-04-13 00:22:39.048493 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-13 00:22:39.048512 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-13 00:22:39.048530 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-13 00:22:39.048548 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-04-13 00:22:39.048591 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-04-13 00:22:39.048639 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-04-13 00:22:39.048658 | orchestrator |
2026-04-13 00:22:39.048715 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-04-13 00:22:39.048737 | orchestrator | Monday 13 April 2026 00:22:35 +0000 (0:00:05.404) 0:00:20.379 **********
2026-04-13 00:22:39.048757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-04-13 00:22:39.048778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-04-13 00:22:39.048796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-04-13 00:22:39.048813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-04-13 00:22:39.048832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-04-13 00:22:39.048849 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-04-13 00:22:39.048868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-04-13 00:22:39.048887 | orchestrator |
2026-04-13 00:22:39.048933 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-13 00:22:39.048953 | orchestrator | Monday 13 April 2026 00:22:35 +0000 (0:00:00.211) 0:00:20.591 **********
2026-04-13 00:22:39.048973 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINm66KpTzx+z8DLSpN1i757N8ydA5cyM7ARuwqUBC5nC)
2026-04-13 00:22:39.048996 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfAZQeUn9B7UY+Pg+Sl2EVt60vxDl4f+TyGQOm1OXkiAq0EtUSPoOZ5aK/VLUGTSKeXRWCljwJnmNYBQqSvYhso51hj8rJt+Uf599pNW/iIRRxqhpUJNu4Fy6hvw5P90BvhRfEJrjsHRcdJDnL9f7FyGHmvO0mMcg8+HAOqlLXkUZpVY/rYuA0RXKAae+XsiLvvxvNTefM+74knUOf5OzRZELs1pjqfREL1QEday++aJYy8XF/mD6aUTsECUWLPVjYqG+gSSn/c4/ASPcivaWar5sBS6DAVUCeXUaugynt2VXwWQaiHja23A5U0EgVuBOuyjHLb+a3ILaZnDwvueQMmqYflVNg121U7xhaAWNySYm+HFv5sdrLOZb2imD0scjswaRhNogTZUMHkO57uylV6wHdixWzd/6VoHMdZPM+lgUwa3DcOHVvlTp1VcYuGijZiwBfqoR/RUQ/6NLG7npMYV/vyjXqtOtgyvQh4NuhRuwgufsuxJN3MFbuj5OOmEM=)
2026-04-13 00:22:39.049016 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOO81NlwiIFOw+dsgQRxvjWb4rSbCmh31sVvNsYkEUEJNT9Nk+/ev7TGIImzcPOnU/qKYwxe4D76YZ8lGyWMwiI=)
2026-04-13 00:22:39.049036 | orchestrator |
2026-04-13 00:22:39.049054 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-13 00:22:39.049074 | orchestrator | Monday 13 April 2026 00:22:36 +0000 (0:00:01.143) 0:00:21.734 **********
2026-04-13 00:22:39.049094 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCY1McA36y5b7NftVEOIZqq3bJEIDRHBmBJQeglvx7WgT2Ly+8IWyxyzA8prlpFxiw2aLKKF9K70Wmhefs1egA3PqcCqzY2M2twwKoKQf4FmAs27mrLPLqUro9ywzWvruoHxM/NeGkKRllq+lVkvlgxby1O1XoY+X7GqcPM4xZ8m4Nq3G/xPfQZDYuTXZ6/iVJRjLMwSGNuJXITCq/5Gp4zS4z+3SekkhuRu3Qp8OnoelC4gzTjbZNjtVa6Ld61MiDhuBub5JdKI6OoRy5yjLL2BloHBZGK5CKKLUp9q2sxwmrbX9jCvRLZVcHVj4kys8hGER31is2EewR/MdiwLlEcgWe/1sJRdGy97ETDII/+I51SFdpAbZaeg6HELXGOk+xFSxDturG17aodHqghKCAQJRHVYyrF8uPX1Ve/GuyQb+LfFnw+SVlXQTlD/WoO+uX0tJvp+rZkWw9GQX6pRBDMQxyqgY+yqJY3w3FqJSigWc6S9+wKr6VhkEM9rYFQ7Y0=)
2026-04-13 00:22:39.049135 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPnoJtt+JInBvj6woCf9zIvtA3skSXtUvQL/C+NQx550Y5iRLqQt7dYFMtVi0IUfYfwKxEE9vYWyMpY7igXLzmA=)
2026-04-13 00:22:39.049156 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ/9YgxGrG/oyjMMUNFpz0dKI9JtkfV7YqnDdc+PowMX)
2026-04-13 00:22:39.049174 | orchestrator |
2026-04-13 00:22:39.049192 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-13 00:22:39.049211 | orchestrator | Monday 13 April 2026 00:22:37 +0000 (0:00:01.083) 0:00:22.818 **********
2026-04-13 00:22:39.049230 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGLyCwYO/yEIp3v6HkkGYEIhMX/JrMmj+94Jq2FTV4mp)
2026-04-13 00:22:39.049249 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDffuvhjdPLPthuz9wX32cAdQ66r4CkGSPwYez3AMfyFij2/T126Y2pEWFhj2D5woCgQatoXcfP+07ROhazaudqmxHC5eIlolWvJP1oU4SAgC4xMOXtLz859MuncEY8itE2+uyMCaf2JwoD2vYvnFhQ/GI2B2a8p/5qoONc6qBjbvFC6NS6AlRGACZ6XIj017uiFy7IefKIn1f1DVvXLY5zmx0qeg0udBPM4GMKAdK8A6/rPktqWTNc6Y8cLfIcwV2QG/SaTUgz04M5FJOsYEGIBJVdc1y3cjheVKCBYfi7RBwtRxGdxungK4Bk2O4q3l/QAxuyJIotY8HLddG2jHTMXwZiuC5UJKoMj1BgV4Jf1qrWSODo24ioMu+HtmhVhONaZOv75d0EkOxaJ7j4qpfF6gbDmyUxYq0o9eVfDjIu1i/u3hKmuIn3jkGMinnTUX+K7wfEvmDnRMzWXHh98tezuIkJwvMCYsQxs39DwX2uzL1WE1JlisArhyYSR/eoxxs=)
2026-04-13 00:22:39.049268 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPtsrJXFWIimfSojOGYsJASHwJF7tPi5HBQf9tnltKJW2UDBuQCxcFG+bI8pzyUXFwFzrCRX6e+1B0zNR+SoK/c=)
2026-04-13 00:22:39.049287 | orchestrator |
2026-04-13 00:22:39.049305 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-13 00:22:39.049325 | orchestrator | Monday 13 April 2026 00:22:38 +0000 (0:00:01.121) 0:00:23.939 **********
2026-04-13 00:22:39.049378 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCG/A2roQvD8NFGBe7IXZHuxRCSrfHPp+a/24hMX1uvlCoq/rkuGx4RLVt+feWh01Gn2n/3yWoMwdf5rA9fF7JgxcA9EOgaDVcGrNDIl3KQYtqHgcvhSqugaU2IhBqzoi9I85Pm/K2+282rMI0UO0JA4I4sXXevqeFGgFJ7D4bPg/kR83nzjkfdWOmo/ma5+wz1kcvLS+SE14WpbdFWMmj5Z6030L2/5s4i0fNwdmddbHcmL0KllcA7zRNN7w1yG65fwsXD7/nyLsPqX7lbJXpkn1nQtzYLrQ2bNdeKVtDcDcvtrpmVADB122S+bzlpggy2N0ugiYcGcrCC/9hlMlmr047QiihGh8P1vKta1+dSaZIfYVb/o3s8VA1TqqdZihjkzd2qkapnVKsPXPwsMDmP0gOIXFyNn6r5UZLtoACeaIj1R6VLxF5F5Df8G7dXPl926iq5KGlR64L5XGTqNW40hgd5XEZm06B1o4XVtnHW0U8herxgGVyEo2zT+aHhNb0=)
2026-04-13 00:22:45.140320 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFyXCE9zD8+k24WXzjEpqw7fxV6ZIXOgI34wRJjremkCR3M5/J4Gka+Q+NeQmQknZqOLv5dvf8iPrW4EXSEd8OQ=)
2026-04-13 00:22:45.140448 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPv9PiiKfNa15i4hmZ29vUrgefOHPAbe1iKAemuiAzr1)
2026-04-13 00:22:45.140474 | orchestrator |
2026-04-13 00:22:45.140493 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-13 00:22:45.140511 | orchestrator | Monday 13 April 2026 00:22:39 +0000 (0:00:01.139) 0:00:25.078 **********
2026-04-13 00:22:45.140528 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPX120zNQqdXlfIqfILBEWgMBqyzBtiONbyjjkdZD+yh5EQ1p6rCGRqSw5p1uMwBrG4N+qJWMejcP5MEmC/kJM8=)
2026-04-13 00:22:45.140568 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs+hDd8UUtCwZ84nimMjcl1VOzkFFqNDjfLJn/fMtOnPPFLaB5ZlHPqZ1qaZNUNckv4nisavz89Httj6Wplu68D5etR3dzQG20bAMbil1LtIqeJSNMQQb608c7nqbV2ZGFklrIkPwyMYC+UesE0FimOZeKUfVWmrMwz0/DW4lXPlDhW3ZsWXkLmWDoKwsuY8G5soixEGt3LKT8JFxJ+Y9XDnq+tEMqQO+qf0hDSDS1VsWhV1MXuy5Fd0PktkW/Es2hoyfytx5v7Fj/x5wwTY8a72ReDGP3U8jiy1KktcjKGfE4ysMlFKYf0rIdhNQ40jdApi/DxVdlG4JCnFW+8Li5g/AbZnDpmHCxga8JGv46/tkXHQ7PiZldI/H99pieQEkSr45lgMHB8SRUFJWlVAlaDPNN7hu9o/Sx3h3uzi+UdSaDvkL0nAAkH0rEIwEdFRbsz/Pm6vxRYYJ8vjYwcGtBVRzaPhzp9k3EwIDSJ9xlXwS58nFQfLUNVUWANBWRICc=)
2026-04-13 00:22:45.140621 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOuK4hbYCJTl1AHxJQDo6uP+N1B+3XiA6cbl9tefZPET)
2026-04-13 00:22:45.140640 | orchestrator |
2026-04-13 00:22:45.140656 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-13 00:22:45.140732 | orchestrator | Monday 13 April 2026 00:22:40 +0000 (0:00:01.125) 0:00:26.203 **********
2026-04-13 00:22:45.140751 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNExD7/2j4ZW1+B9hz7XY4Kgwnw2H8/Umdzj8bVsEJ4BkoAydLnIBb+xAai45sVu8sDjnHz4SLL/SA4N9/Pilijpsk+gho7eyV0UECR3J6d2JBnuV7W3WOBjygV/79Q+LqcP7FIJGFHKB6Cu0LjjxSoOuVNr/NFLQFOIjVBJevaHwiPiRqequrxaJBlLN96wwxCGnVbmOfwZvs0y3hWgFtwekyWKDhPDEXCUcT8HYzV4nfgBwONJuy/XbwMc6qi9JkSydu/J7txQYsfhAL4qhYNHepGqVJ/2Fs5O4F8wK2B6H1OHct1Q8JtZ8VbHG5Z0r+5/K74jD4iI8T14V+grLvaBIv7OGkuufkOsZAiIGg6cr1Pa207ZRfDFgdZK4t/+ANGkHiIn+7clQ3UlWsTO0BB6yAfjCozZq9WNoPUSvNegAEmVDm2qzvH6qQyaF8VMI73dz1wNKI/yjthtKyt9XA5K92t9LJfmB6VMl/KAxIf5tI29GzZzaA/iqP1XF5cT0=)
2026-04-13 00:22:45.140770 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM9RqbbuBKh+24HTlgNOzNN+FyRD6AX61mo5TQDT+zxpsWb6KX2sNJf0NeZQEQlwUbt8G43eYts9Go23NIcb8+g=)
2026-04-13 00:22:45.140787 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFwHu4i98AAwTo2y1XAiQVB1p+O3xBX57OGHhz9O5/rr)
2026-04-13 00:22:45.140803 | orchestrator |
2026-04-13 00:22:45.140820 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-13 00:22:45.140837 | orchestrator | Monday 13 April 2026 00:22:42 +0000 (0:00:02.065) 0:00:28.269 **********
2026-04-13 00:22:45.140855 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCK4Hd3AlAc20DMfZDvufflXXwtcv2mG3hmMRf5PK5dBXl9qPjdA+6ivf+HZiVj7nC9b8bUcg2S8Fyaz7YP9PFitvPdcPK4R+ALjMFE7O2iC3exqtx11bqf9hToVtpAwKpAQr2Fps5xW05tydrehKRrakShAxyrMoJvBdEsm6061fqsaEd9BN93QqwcWIRDX/S1De8GDQv3Og9VGqZqAA8QK4Sb0dgSN8YRYowGqtqrrbr6DsBs83PlZKmnaIJgER+SMqulz4BePGLfYmU/2ooSXLN0XRCJAnwhzwH/VyecTctMwJqmRIp+WwJV5SZz3cMlQhXu/+pJ8a57g0JnyGkGm+B5b+kQW62MyG4YACoMIPp6RJrxM8SHx1s+cfay1QvPLohR3WqA/4/jG9yEfeIi1h384tbjlK4JvG2IBSHadfUjEJtrPoZ2zsT9EQlvKjdIyMA7NgzHPFUwcN3IkS0V5C/yE5YC1y9Iec1iA6Fb19mrMUDu9RJxjmlnYrq9Evk=)
2026-04-13 00:22:45.140872 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLXwZuXHJSZ6iDf7r4MvzhpeEdqCIMEHPoVfN9FN5RioyNhGRXI9NHcC6gJWpK7TvJVst1Ec3FytG5MK0GtmMJ4=)
2026-04-13 00:22:45.140889 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPgVlOZ80dVGekLT/wYpBKa0qHgGVWqu+jGlhcy4BHPF)
2026-04-13 00:22:45.140906 | orchestrator |
2026-04-13 00:22:45.140923 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2026-04-13 00:22:45.140940 | orchestrator | Monday 13 April 2026 00:22:44 +0000 (0:00:01.110) 0:00:29.380 **********
2026-04-13 00:22:45.140957 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-13 00:22:45.140976 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-13 00:22:45.141016 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-13 00:22:45.141034 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-13 00:22:45.141051 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-13 00:22:45.141065 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-13 00:22:45.141079 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-13 00:22:45.141095 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:22:45.141111 | orchestrator |
2026-04-13 00:22:45.141127 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2026-04-13 00:22:45.141142 | orchestrator | Monday 13 April 2026 00:22:44 +0000 (0:00:00.179) 0:00:29.560 **********
2026-04-13 00:22:45.141170 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:22:45.141188 | orchestrator |
2026-04-13 00:22:45.141206 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2026-04-13 00:22:45.141221 | orchestrator | Monday 13 April 2026 00:22:44 +0000 (0:00:00.052) 0:00:29.612 **********
2026-04-13 00:22:45.141236 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:22:45.141253 | orchestrator |
2026-04-13 00:22:45.141270 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2026-04-13 00:22:45.141288 | orchestrator | Monday 13 April 2026 00:22:44 +0000 (0:00:00.061) 0:00:29.674 **********
2026-04-13 00:22:45.141304 | orchestrator | changed: [testbed-manager]
2026-04-13 00:22:45.141321 | orchestrator |
2026-04-13 00:22:45.141336 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:22:45.141353 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-13 00:22:45.141370 | orchestrator |
2026-04-13 00:22:45.141386 | orchestrator |
2026-04-13 00:22:45.141402 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:22:45.141418 | orchestrator | Monday 13 April 2026 00:22:44 +0000 (0:00:00.542) 0:00:30.217 **********
2026-04-13 00:22:45.141434 | orchestrator | ===============================================================================
2026-04-13 00:22:45.141450 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.55s
2026-04-13 00:22:45.141466 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.40s
2026-04-13 00:22:45.141483 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.07s
2026-04-13 00:22:45.141499 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.32s
2026-04-13 00:22:45.141516 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2026-04-13 00:22:45.141531 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2026-04-13 00:22:45.141547 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2026-04-13 00:22:45.141562 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2026-04-13 00:22:45.141578 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2026-04-13 00:22:45.141593 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2026-04-13 00:22:45.141609 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2026-04-13 00:22:45.141636 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2026-04-13 00:22:45.141653 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2026-04-13 00:22:45.141697 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2026-04-13 00:22:45.141715 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2026-04-13 00:22:45.141727 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2026-04-13 00:22:45.141737 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.54s
2026-04-13 00:22:45.141746 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.21s
2026-04-13 00:22:45.141756 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s
2026-04-13 00:22:45.141766 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s
2026-04-13 00:22:45.341618 | orchestrator | + osism apply squid
2026-04-13 00:22:56.721195 | orchestrator | 2026-04-13 00:22:56 | INFO  | Prepare task for execution of squid.
2026-04-13 00:22:56.805864 | orchestrator | 2026-04-13 00:22:56 | INFO  | Task 3b2b433a-2731-4687-aa8d-472b1b9a58e4 (squid) was prepared for execution.
2026-04-13 00:22:56.805983 | orchestrator | 2026-04-13 00:22:56 | INFO  | It takes a moment until task 3b2b433a-2731-4687-aa8d-472b1b9a58e4 (squid) has been started and output is visible here.
2026-04-13 00:24:55.426940 | orchestrator |
2026-04-13 00:24:55.427049 | orchestrator | PLAY [Apply role squid] ********************************************************
2026-04-13 00:24:55.427066 | orchestrator |
2026-04-13 00:24:55.427078 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2026-04-13 00:24:55.427090 | orchestrator | Monday 13 April 2026 00:23:00 +0000 (0:00:00.194) 0:00:00.194 **********
2026-04-13 00:24:55.427101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2026-04-13 00:24:55.427113 | orchestrator |
2026-04-13 00:24:55.427124 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2026-04-13 00:24:55.427135 | orchestrator | Monday 13 April 2026 00:23:00 +0000 (0:00:00.082) 0:00:00.277 **********
2026-04-13 00:24:55.427146 | orchestrator | ok: [testbed-manager]
2026-04-13 00:24:55.427157 | orchestrator |
2026-04-13 00:24:55.427168 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2026-04-13 00:24:55.427179 | orchestrator | Monday 13 April 2026 00:23:02 +0000 (0:00:02.538) 0:00:02.816 **********
2026-04-13 00:24:55.427191 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2026-04-13 00:24:55.427201 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2026-04-13 00:24:55.427212 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2026-04-13 00:24:55.427223 | orchestrator |
2026-04-13 00:24:55.427234 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2026-04-13 00:24:55.427245 | orchestrator | Monday 13 April 2026 00:23:03 +0000 (0:00:01.270) 0:00:04.086 **********
2026-04-13 00:24:55.427256 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2026-04-13 00:24:55.427267 | orchestrator |
2026-04-13 00:24:55.427277 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2026-04-13 00:24:55.427288 | orchestrator | Monday 13 April 2026 00:23:05 +0000 (0:00:01.105) 0:00:05.192 **********
2026-04-13 00:24:55.427298 | orchestrator | ok: [testbed-manager]
2026-04-13 00:24:55.427309 | orchestrator |
2026-04-13 00:24:55.427338 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2026-04-13 00:24:55.427350 | orchestrator | Monday 13 April 2026 00:23:05 +0000 (0:00:00.367) 0:00:05.560 **********
2026-04-13 00:24:55.427360 | orchestrator | changed: [testbed-manager]
2026-04-13 00:24:55.427372 | orchestrator |
2026-04-13 00:24:55.427383 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2026-04-13 00:24:55.427393 | orchestrator | Monday 13 April 2026 00:23:06 +0000 (0:00:00.929) 0:00:06.489 **********
2026-04-13 00:24:55.427404 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2026-04-13 00:24:55.427416 | orchestrator | ok: [testbed-manager]
2026-04-13 00:24:55.427427 | orchestrator |
2026-04-13 00:24:55.427437 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-04-13 00:24:55.427448 | orchestrator | Monday 13 April 2026 00:23:42 +0000 (0:00:35.976) 0:00:42.466 **********
2026-04-13 00:24:55.427459 | orchestrator | changed: [testbed-manager]
2026-04-13 00:24:55.427470 | orchestrator |
2026-04-13 00:24:55.427481 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-04-13 00:24:55.427494 | orchestrator | Monday 13 April 2026 00:23:54 +0000 (0:00:12.055) 0:00:54.522 **********
2026-04-13 00:24:55.427507 | orchestrator | Pausing for 60 seconds
2026-04-13 00:24:55.427587 | orchestrator | changed: [testbed-manager]
2026-04-13 00:24:55.427603 | orchestrator |
2026-04-13 00:24:55.427615 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-04-13 00:24:55.427628 | orchestrator | Monday 13 April 2026 00:24:54 +0000 (0:01:00.095) 0:01:54.618 **********
2026-04-13 00:24:55.427640 | orchestrator | ok: [testbed-manager]
2026-04-13 00:24:55.427652 | orchestrator |
2026-04-13 00:24:55.427665 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-04-13 00:24:55.427701 | orchestrator | Monday 13 April 2026 00:24:54 +0000 (0:00:00.075) 0:01:54.693 **********
2026-04-13 00:24:55.427714 | orchestrator | changed: [testbed-manager]
2026-04-13 00:24:55.427727 | orchestrator |
2026-04-13 00:24:55.427739 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:24:55.427751 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:24:55.427763 | orchestrator |
2026-04-13 00:24:55.427775 | orchestrator |
2026-04-13 00:24:55.427787 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:24:55.427800 | orchestrator | Monday 13 April 2026 00:24:55 +0000 (0:00:00.624) 0:01:55.318 **********
2026-04-13 00:24:55.427812 | orchestrator | ===============================================================================
2026-04-13 00:24:55.427824 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s
2026-04-13 00:24:55.427837 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.98s
2026-04-13 00:24:55.427848 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.06s
2026-04-13 00:24:55.427859 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.54s
2026-04-13 00:24:55.427869 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.27s
2026-04-13 00:24:55.427880 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.11s
2026-04-13 00:24:55.427891 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.93s
2026-04-13 00:24:55.427901 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.62s
2026-04-13 00:24:55.427912 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s
2026-04-13 00:24:55.427923 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2026-04-13 00:24:55.427934 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s
2026-04-13 00:24:55.614970 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-13 00:24:55.615051 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla
2026-04-13 00:24:55.621752 | orchestrator | + set -e
2026-04-13 00:24:55.621840 | orchestrator | + NAMESPACE=kolla
2026-04-13 00:24:55.621852 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-13 00:24:55.659834 | orchestrator | ++ semver latest 9.0.0
2026-04-13 00:24:55.707630 | orchestrator | + [[ -1 -lt 0 ]]
2026-04-13 00:24:55.707724 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-13 00:24:55.708246 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-04-13 00:25:07.171862 | orchestrator | 2026-04-13 00:25:07 | INFO  | Prepare task for execution of operator.
2026-04-13 00:25:07.244201 | orchestrator | 2026-04-13 00:25:07 | INFO  | Task fca0b7d8-f652-46a7-a74a-1316ec558e52 (operator) was prepared for execution.
2026-04-13 00:25:07.244260 | orchestrator | 2026-04-13 00:25:07 | INFO  | It takes a moment until task fca0b7d8-f652-46a7-a74a-1316ec558e52 (operator) has been started and output is visible here.
2026-04-13 00:25:21.819597 | orchestrator |
2026-04-13 00:25:21.819709 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-04-13 00:25:21.819727 | orchestrator |
2026-04-13 00:25:21.819739 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-13 00:25:21.819750 | orchestrator | Monday 13 April 2026 00:25:10 +0000 (0:00:00.166) 0:00:00.166 **********
2026-04-13 00:25:21.819762 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:25:21.819774 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:25:21.819785 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:25:21.819796 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:25:21.819806 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:25:21.819821 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:25:21.819832 | orchestrator |
2026-04-13 00:25:21.819843 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-04-13 00:25:21.819878 | orchestrator | Monday 13 April 2026 00:25:13 +0000 (0:00:03.044) 0:00:03.211 **********
2026-04-13 00:25:21.819889 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:25:21.819900 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:25:21.819911 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:25:21.819921 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:25:21.819932 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:25:21.819943 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:25:21.819953 | orchestrator |
2026-04-13 00:25:21.819964 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-04-13 00:25:21.819975 | orchestrator |
2026-04-13 00:25:21.819985 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-04-13 00:25:21.819996 | orchestrator | Monday 13 April 2026 00:25:14 +0000 (0:00:00.844) 0:00:04.055 **********
2026-04-13 00:25:21.820007 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:25:21.820018 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:25:21.820028 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:25:21.820039 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:25:21.820049 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:25:21.820060 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:25:21.820071 | orchestrator |
2026-04-13 00:25:21.820082 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-04-13 00:25:21.820112 | orchestrator | Monday 13 April 2026 00:25:14 +0000 (0:00:00.157) 0:00:04.212 **********
2026-04-13 00:25:21.820125 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:25:21.820137 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:25:21.820149 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:25:21.820161 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:25:21.820173 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:25:21.820186 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:25:21.820197 | orchestrator |
2026-04-13 00:25:21.820209 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-04-13 00:25:21.820221 | orchestrator | Monday 13 April 2026 00:25:14 +0000 (0:00:00.164) 0:00:04.377 **********
2026-04-13 00:25:21.820234 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:25:21.820247 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:25:21.820260 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:25:21.820272 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:25:21.820284 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:25:21.820298 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:25:21.820310 | orchestrator |
2026-04-13 00:25:21.820322 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-04-13 00:25:21.820334 | orchestrator | Monday 13 April 2026 00:25:15 +0000 (0:00:00.770) 0:00:05.147 **********
2026-04-13 00:25:21.820347 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:25:21.820359 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:25:21.820370 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:25:21.820382 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:25:21.820394 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:25:21.820407 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:25:21.820419 | orchestrator |
2026-04-13 00:25:21.820432 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-04-13 00:25:21.820445 | orchestrator | Monday 13 April 2026 00:25:16 +0000 (0:00:00.878) 0:00:06.026 **********
2026-04-13 00:25:21.820456 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-04-13 00:25:21.820467 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-04-13 00:25:21.820477 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-04-13 00:25:21.820488 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-04-13 00:25:21.820523 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-04-13 00:25:21.820534 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-04-13 00:25:21.820545 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-04-13 00:25:21.820555 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-04-13 00:25:21.820575 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-04-13 00:25:21.820586 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-04-13 00:25:21.820596 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-04-13 00:25:21.820607 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-04-13 00:25:21.820618 | orchestrator |
2026-04-13 00:25:21.820628 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-13 00:25:21.820639 | orchestrator | Monday 13 April 2026 00:25:17 +0000 (0:00:01.131) 0:00:07.157 **********
2026-04-13 00:25:21.820650 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:25:21.820660 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:25:21.820671 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:25:21.820681 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:25:21.820692 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:25:21.820703 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:25:21.820713 | orchestrator |
2026-04-13 00:25:21.820724 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-13 00:25:21.820736 | orchestrator | Monday 13 April 2026 00:25:18 +0000 (0:00:01.272) 0:00:08.430 **********
2026-04-13 00:25:21.820746 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-04-13 00:25:21.820757 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-04-13 00:25:21.820768 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-04-13 00:25:21.820778 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-04-13 00:25:21.820789 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-13 00:25:21.820817 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-04-13 00:25:21.820828 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-04-13 00:25:21.820839 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-04-13 00:25:21.820849 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-04-13 00:25:21.820860 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-04-13 00:25:21.820871 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-04-13 00:25:21.820881 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-04-13 00:25:21.820892 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-04-13 00:25:21.820903 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-13 00:25:21.820914 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-13 00:25:21.820929 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-13 00:25:21.820940 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-04-13 00:25:21.820951 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-04-13 00:25:21.820961 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-04-13 00:25:21.820972 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-04-13 00:25:21.820982 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-04-13 00:25:21.820993 | orchestrator |
2026-04-13 00:25:21.821004 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-13 00:25:21.821015 | orchestrator | Monday 13 April 2026 00:25:19 +0000 (0:00:01.335) 0:00:09.765 **********
2026-04-13 00:25:21.821026 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:25:21.821036 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:25:21.821047 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:25:21.821058 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:25:21.821068 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:25:21.821079 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:25:21.821089 | orchestrator |
2026-04-13 00:25:21.821100 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-13 00:25:21.821118 | orchestrator | Monday 13 April 2026 00:25:19 +0000 (0:00:00.193) 0:00:09.935 **********
2026-04-13 00:25:21.821128 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:25:21.821139 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:25:21.821149 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:25:21.821160 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:25:21.821170 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:25:21.821181 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:25:21.821191 | orchestrator |
2026-04-13 00:25:21.821202 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-13 00:25:21.821213 | orchestrator | Monday 13 April 2026 00:25:20 +0000 (0:00:00.193) 0:00:10.129 **********
2026-04-13 00:25:21.821223 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:25:21.821234 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:25:21.821244 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:25:21.821255 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:25:21.821265 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:25:21.821276 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:25:21.821286 | orchestrator |
2026-04-13 00:25:21.821297 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-13 00:25:21.821307 | orchestrator | Monday 13 April 2026 00:25:20 +0000 (0:00:00.527) 0:00:10.657 **********
2026-04-13 00:25:21.821318 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:25:21.821328 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:25:21.821339 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:25:21.821349 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:25:21.821360 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:25:21.821370 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:25:21.821381 | orchestrator |
2026-04-13 00:25:21.821392 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-13 00:25:21.821402 | orchestrator | Monday 13 April 2026 00:25:20 +0000 (0:00:00.191) 0:00:10.848 **********
2026-04-13 00:25:21.821413 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-13 00:25:21.821423 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:25:21.821434 | orchestrator | changed:
[testbed-node-1] => (item=None) 2026-04-13 00:25:21.821445 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-13 00:25:21.821455 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:25:21.821466 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:25:21.821476 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-13 00:25:21.821487 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:25:21.821520 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-13 00:25:21.821531 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:25:21.821541 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-13 00:25:21.821552 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:25:21.821563 | orchestrator | 2026-04-13 00:25:21.821573 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-13 00:25:21.821584 | orchestrator | Monday 13 April 2026 00:25:21 +0000 (0:00:00.694) 0:00:11.543 ********** 2026-04-13 00:25:21.821595 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:25:21.821606 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:25:21.821616 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:25:21.821627 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:25:21.821637 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:25:21.821648 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:25:21.821658 | orchestrator | 2026-04-13 00:25:21.821669 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-13 00:25:21.821680 | orchestrator | Monday 13 April 2026 00:25:21 +0000 (0:00:00.139) 0:00:11.682 ********** 2026-04-13 00:25:21.821691 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:25:21.821701 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:25:21.821712 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:25:21.821729 | orchestrator | skipping: 
[testbed-node-3] 2026-04-13 00:25:21.821747 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:25:23.107295 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:25:23.107401 | orchestrator | 2026-04-13 00:25:23.107417 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-13 00:25:23.107430 | orchestrator | Monday 13 April 2026 00:25:21 +0000 (0:00:00.142) 0:00:11.825 ********** 2026-04-13 00:25:23.107441 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:25:23.107452 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:25:23.107462 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:25:23.107575 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:25:23.107591 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:25:23.107601 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:25:23.107612 | orchestrator | 2026-04-13 00:25:23.107623 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-13 00:25:23.107634 | orchestrator | Monday 13 April 2026 00:25:21 +0000 (0:00:00.158) 0:00:11.983 ********** 2026-04-13 00:25:23.107645 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:25:23.107656 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:25:23.107667 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:25:23.107689 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:25:23.107701 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:25:23.107723 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:25:23.107734 | orchestrator | 2026-04-13 00:25:23.107745 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-13 00:25:23.107756 | orchestrator | Monday 13 April 2026 00:25:22 +0000 (0:00:00.657) 0:00:12.640 ********** 2026-04-13 00:25:23.107767 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:25:23.107778 | orchestrator | skipping: 
[testbed-node-1] 2026-04-13 00:25:23.107788 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:25:23.107799 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:25:23.107810 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:25:23.107821 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:25:23.107832 | orchestrator | 2026-04-13 00:25:23.107844 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:25:23.107959 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 00:25:23.107979 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 00:25:23.107992 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 00:25:23.108003 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 00:25:23.108013 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 00:25:23.108024 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 00:25:23.108035 | orchestrator | 2026-04-13 00:25:23.108045 | orchestrator | 2026-04-13 00:25:23.108057 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:25:23.108067 | orchestrator | Monday 13 April 2026 00:25:22 +0000 (0:00:00.242) 0:00:12.883 ********** 2026-04-13 00:25:23.108078 | orchestrator | =============================================================================== 2026-04-13 00:25:23.108089 | orchestrator | Gathering Facts --------------------------------------------------------- 3.04s 2026-04-13 00:25:23.108100 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.34s 2026-04-13 
00:25:23.108112 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s 2026-04-13 00:25:23.108146 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.13s 2026-04-13 00:25:23.108157 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.88s 2026-04-13 00:25:23.108168 | orchestrator | Do not require tty for all users ---------------------------------------- 0.84s 2026-04-13 00:25:23.108179 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.77s 2026-04-13 00:25:23.108189 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s 2026-04-13 00:25:23.108200 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s 2026-04-13 00:25:23.108211 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.53s 2026-04-13 00:25:23.108222 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s 2026-04-13 00:25:23.108233 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s 2026-04-13 00:25:23.108244 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s 2026-04-13 00:25:23.108254 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s 2026-04-13 00:25:23.108265 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s 2026-04-13 00:25:23.108276 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2026-04-13 00:25:23.108287 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2026-04-13 00:25:23.108297 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 
2026-04-13 00:25:23.108308 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2026-04-13 00:25:23.296827 | orchestrator | + osism apply --environment custom facts
2026-04-13 00:25:24.623925 | orchestrator | 2026-04-13 00:25:24 | INFO  | Trying to run play facts in environment custom
2026-04-13 00:25:34.770078 | orchestrator | 2026-04-13 00:25:34 | INFO  | Prepare task for execution of facts.
2026-04-13 00:25:34.854578 | orchestrator | 2026-04-13 00:25:34 | INFO  | Task 44257ab1-fc3a-494d-9207-8919e1e1afeb (facts) was prepared for execution.
2026-04-13 00:25:34.854686 | orchestrator | 2026-04-13 00:25:34 | INFO  | It takes a moment until task 44257ab1-fc3a-494d-9207-8919e1e1afeb (facts) has been started and output is visible here.
2026-04-13 00:26:18.266431 | orchestrator |
2026-04-13 00:26:18.266585 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-04-13 00:26:18.266603 | orchestrator |
2026-04-13 00:26:18.266615 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-13 00:26:18.266646 | orchestrator | Monday 13 April 2026 00:25:38 +0000 (0:00:00.122) 0:00:00.122 **********
2026-04-13 00:26:18.266657 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:26:18.266669 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:26:18.266680 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:26:18.266690 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:26:18.266701 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:26:18.266712 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:26:18.266723 | orchestrator | ok: [testbed-manager]
2026-04-13 00:26:18.266734 | orchestrator |
2026-04-13 00:26:18.266745 | orchestrator | TASK [Copy fact file] **********************************************************
2026-04-13 00:26:18.266756 | orchestrator | Monday 13 April 2026 00:25:39 +0000 (0:00:01.512) 0:00:01.634 **********
2026-04-13 00:26:18.266767 | orchestrator | ok: [testbed-manager]
2026-04-13 00:26:18.266778 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:26:18.266789 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:26:18.266800 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:26:18.266811 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:26:18.266822 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:26:18.266833 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:26:18.266865 | orchestrator |
2026-04-13 00:26:18.266877 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-04-13 00:26:18.266888 | orchestrator |
2026-04-13 00:26:18.266898 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-13 00:26:18.266909 | orchestrator | Monday 13 April 2026 00:25:40 +0000 (0:00:01.350) 0:00:02.984 **********
2026-04-13 00:26:18.266920 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:18.266931 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:18.266941 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:18.266952 | orchestrator |
2026-04-13 00:26:18.266963 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-13 00:26:18.266974 | orchestrator | Monday 13 April 2026 00:25:41 +0000 (0:00:00.107) 0:00:03.091 **********
2026-04-13 00:26:18.266985 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:18.266996 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:18.267006 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:18.267017 | orchestrator |
2026-04-13 00:26:18.267028 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-13 00:26:18.267039 | orchestrator | Monday 13 April 2026 00:25:41 +0000 (0:00:00.231) 0:00:03.322 **********
2026-04-13 00:26:18.267050 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:18.267060 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:18.267071 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:18.267081 | orchestrator |
2026-04-13 00:26:18.267092 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-13 00:26:18.267103 | orchestrator | Monday 13 April 2026 00:25:41 +0000 (0:00:00.233) 0:00:03.556 **********
2026-04-13 00:26:18.267115 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:26:18.267127 | orchestrator |
2026-04-13 00:26:18.267138 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-13 00:26:18.267149 | orchestrator | Monday 13 April 2026 00:25:41 +0000 (0:00:00.141) 0:00:03.698 **********
2026-04-13 00:26:18.267160 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:18.267170 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:18.267181 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:18.267191 | orchestrator |
2026-04-13 00:26:18.267202 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-13 00:26:18.267213 | orchestrator | Monday 13 April 2026 00:25:42 +0000 (0:00:00.479) 0:00:04.178 **********
2026-04-13 00:26:18.267224 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:26:18.267234 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:26:18.267245 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:26:18.267256 | orchestrator |
2026-04-13 00:26:18.267266 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-13 00:26:18.267277 | orchestrator | Monday 13 April 2026 00:25:42 +0000 (0:00:00.124) 0:00:04.302 **********
2026-04-13 00:26:18.267288 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:26:18.267299 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:26:18.267309 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:26:18.267320 | orchestrator |
2026-04-13 00:26:18.267331 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-13 00:26:18.267342 | orchestrator | Monday 13 April 2026 00:25:43 +0000 (0:00:01.048) 0:00:05.351 **********
2026-04-13 00:26:18.267352 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:18.267363 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:18.267374 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:18.267385 | orchestrator |
2026-04-13 00:26:18.267395 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-13 00:26:18.267406 | orchestrator | Monday 13 April 2026 00:25:43 +0000 (0:00:00.465) 0:00:05.816 **********
2026-04-13 00:26:18.267417 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:26:18.267428 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:26:18.267457 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:26:18.267475 | orchestrator |
2026-04-13 00:26:18.267487 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-13 00:26:18.267497 | orchestrator | Monday 13 April 2026 00:25:44 +0000 (0:00:01.038) 0:00:06.854 **********
2026-04-13 00:26:18.267508 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:26:18.267519 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:26:18.267529 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:26:18.267540 | orchestrator |
2026-04-13 00:26:18.267551 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-04-13 00:26:18.267561 | orchestrator | Monday 13 April 2026 00:26:01 +0000 (0:00:16.641) 0:00:23.496 **********
2026-04-13 00:26:18.267572 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:26:18.267583 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:26:18.267594 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:26:18.267604 | orchestrator |
2026-04-13 00:26:18.267615 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-13 00:26:18.267643 | orchestrator | Monday 13 April 2026 00:26:01 +0000 (0:00:00.102) 0:00:23.598 **********
2026-04-13 00:26:18.267655 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:26:18.267666 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:26:18.267677 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:26:18.267688 | orchestrator |
2026-04-13 00:26:18.267699 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-13 00:26:18.267710 | orchestrator | Monday 13 April 2026 00:26:09 +0000 (0:00:07.608) 0:00:31.207 **********
2026-04-13 00:26:18.267720 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:18.267731 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:18.267742 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:18.267753 | orchestrator |
2026-04-13 00:26:18.267764 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-13 00:26:18.267775 | orchestrator | Monday 13 April 2026 00:26:09 +0000 (0:00:00.445) 0:00:31.652 **********
2026-04-13 00:26:18.267785 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-13 00:26:18.267797 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-13 00:26:18.267807 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-13 00:26:18.267818 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-13 00:26:18.267829 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-13 00:26:18.267839 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-13 00:26:18.267850 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-13 00:26:18.267861 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-13 00:26:18.267872 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-13 00:26:18.267883 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-13 00:26:18.267893 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-13 00:26:18.267904 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-13 00:26:18.267915 | orchestrator |
2026-04-13 00:26:18.267925 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-13 00:26:18.267936 | orchestrator | Monday 13 April 2026 00:26:13 +0000 (0:00:03.500) 0:00:35.153 **********
2026-04-13 00:26:18.267947 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:18.267958 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:18.267969 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:18.267979 | orchestrator |
2026-04-13 00:26:18.267990 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-13 00:26:18.268001 | orchestrator |
2026-04-13 00:26:18.268012 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-13 00:26:18.268061 | orchestrator | Monday 13 April 2026 00:26:14 +0000 (0:00:01.276) 0:00:36.429 **********
2026-04-13 00:26:18.268080 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:26:18.268092 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:26:18.268102 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:26:18.268113 | orchestrator | ok: [testbed-manager]
2026-04-13 00:26:18.268124 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:18.268134 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:18.268145 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:18.268156 | orchestrator |
2026-04-13 00:26:18.268166 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:26:18.268177 | orchestrator | testbed-manager : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:26:18.268189 | orchestrator | testbed-node-0 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:26:18.268201 | orchestrator | testbed-node-1 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:26:18.268211 | orchestrator | testbed-node-2 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:26:18.268222 | orchestrator | testbed-node-3 : ok=16 changed=7 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-04-13 00:26:18.268233 | orchestrator | testbed-node-4 : ok=16 changed=7 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-04-13 00:26:18.268244 | orchestrator | testbed-node-5 : ok=16 changed=7 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-04-13 00:26:18.268255 | orchestrator |
2026-04-13 00:26:18.268265 | orchestrator |
2026-04-13 00:26:18.268276 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:26:18.268287 | orchestrator | Monday 13 April 2026 00:26:18 +0000 (0:00:03.879) 0:00:40.309 **********
2026-04-13 00:26:18.268298 | orchestrator | ===============================================================================
2026-04-13 00:26:18.268309 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.64s
2026-04-13 00:26:18.268319 | orchestrator | Install required packages (Debian) -------------------------------------- 7.61s
2026-04-13 00:26:18.268330 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.88s
2026-04-13 00:26:18.268340 | orchestrator | Copy fact files --------------------------------------------------------- 3.50s
2026-04-13 00:26:18.268351 | orchestrator | Create custom facts directory ------------------------------------------- 1.51s
2026-04-13 00:26:18.268362 | orchestrator | Copy fact file ---------------------------------------------------------- 1.35s
2026-04-13 00:26:18.268379 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.28s
2026-04-13 00:26:18.463905 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s
2026-04-13 00:26:18.464008 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s
2026-04-13 00:26:18.464019 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.48s
2026-04-13 00:26:18.464027 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2026-04-13 00:26:18.464034 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2026-04-13 00:26:18.464042 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s
2026-04-13 00:26:18.464049 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.23s
2026-04-13 00:26:18.464056 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-04-13 00:26:18.464065 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2026-04-13 00:26:18.464072 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-04-13 00:26:18.464096 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-04-13 00:26:18.677703 | orchestrator | + osism apply bootstrap
2026-04-13 00:26:30.130422 | orchestrator | 2026-04-13 00:26:30 | INFO  | Prepare task for execution of bootstrap.
2026-04-13 00:26:30.215826 | orchestrator | 2026-04-13 00:26:30 | INFO  | Task 745a2d8b-4195-4e31-9383-a37bd2fd0375 (bootstrap) was prepared for execution.
2026-04-13 00:26:30.215918 | orchestrator | 2026-04-13 00:26:30 | INFO  | It takes a moment until task 745a2d8b-4195-4e31-9383-a37bd2fd0375 (bootstrap) has been started and output is visible here.
2026-04-13 00:26:47.070353 | orchestrator |
2026-04-13 00:26:47.070493 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-13 00:26:47.070517 | orchestrator |
2026-04-13 00:26:47.070529 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-13 00:26:47.070540 | orchestrator | Monday 13 April 2026 00:26:33 +0000 (0:00:00.194) 0:00:00.194 **********
2026-04-13 00:26:47.070551 | orchestrator | ok: [testbed-manager]
2026-04-13 00:26:47.070561 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:26:47.070571 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:26:47.070581 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:26:47.070591 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:47.070601 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:47.070610 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:47.070620 | orchestrator |
2026-04-13 00:26:47.070630 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-13 00:26:47.070640 | orchestrator |
2026-04-13 00:26:47.070650 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-13 00:26:47.070660 | orchestrator | Monday 13 April 2026 00:26:33 +0000 (0:00:00.320) 0:00:00.515 **********
2026-04-13 00:26:47.070670 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:26:47.070680 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:26:47.070690 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:26:47.070700 | orchestrator | ok: [testbed-manager]
2026-04-13 00:26:47.070709 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:47.070719 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:47.070729 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:47.070738 | orchestrator |
2026-04-13 00:26:47.070748 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-13 00:26:47.070757 | orchestrator |
2026-04-13 00:26:47.070767 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-13 00:26:47.070777 | orchestrator | Monday 13 April 2026 00:26:39 +0000 (0:00:05.584) 0:00:06.100 **********
2026-04-13 00:26:47.070788 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-13 00:26:47.070797 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-13 00:26:47.070807 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-13 00:26:47.070817 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-13 00:26:47.070827 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-13 00:26:47.070836 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-13 00:26:47.070846 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-13 00:26:47.070856 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-13 00:26:47.070865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-13 00:26:47.070875 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-13 00:26:47.070885 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-13 00:26:47.070895 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-13 00:26:47.070907 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-13 00:26:47.070918 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-13 00:26:47.070930 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:26:47.070942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-13 00:26:47.070974 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-13 00:26:47.070986 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-13 00:26:47.070998 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-13 00:26:47.071009 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-13 00:26:47.071020 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-13 00:26:47.071031 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:26:47.071043 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-13 00:26:47.071054 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-13 00:26:47.071066 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-13 00:26:47.071077 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-13 00:26:47.071088 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-13 00:26:47.071100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-13 00:26:47.071111 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-13 00:26:47.071123 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-13 00:26:47.071134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:26:47.071145 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-13 00:26:47.071157 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:26:47.071168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:26:47.071179 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-13 00:26:47.071191 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-13 00:26:47.071205 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-13 00:26:47.071221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:26:47.071232 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:26:47.071244 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-13 00:26:47.071256 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-13 00:26:47.071266 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-13 00:26:47.071275 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-13 00:26:47.071285 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-13 00:26:47.071295 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-13 00:26:47.071305 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-13 00:26:47.071332 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-13 00:26:47.071343 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-13 00:26:47.071352 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:26:47.071362 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-13 00:26:47.071372 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:26:47.071381 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-13 00:26:47.071391 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-13 00:26:47.071401 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-13 00:26:47.071432 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-13 00:26:47.071443 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:26:47.071452 | orchestrator |
2026-04-13 00:26:47.071462 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-13 00:26:47.071472 | orchestrator |
2026-04-13 00:26:47.071482 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-13 00:26:47.071492 | orchestrator | Monday 13 April 2026 00:26:40 +0000 (0:00:00.560) 0:00:06.660 **********
2026-04-13 00:26:47.071501 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:47.071519 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:26:47.071529 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:26:47.071538 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:26:47.071548 | orchestrator | ok: [testbed-manager]
2026-04-13 00:26:47.071557 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:47.071567 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:47.071581 | orchestrator |
2026-04-13 00:26:47.071597 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-13 00:26:47.071615 | orchestrator | Monday 13 April 2026 00:26:41 +0000 (0:00:01.272) 0:00:07.933 **********
2026-04-13 00:26:47.071631 | orchestrator | ok: [testbed-manager]
2026-04-13 00:26:47.071646 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:47.071661 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:26:47.071679 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:47.071696 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:47.071713 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:26:47.071730 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:26:47.071750 | orchestrator |
2026-04-13 00:26:47.071769 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-13 00:26:47.071786 | orchestrator | Monday 13 April 2026 00:26:42 +0000 (0:00:01.255) 0:00:09.189 **********
2026-04-13 00:26:47.071807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:26:47.071826 | orchestrator |
2026-04-13 00:26:47.071837 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-13 00:26:47.071848 | orchestrator | Monday 13 April 2026 00:26:42 +0000 (0:00:00.317) 0:00:09.507 **********
2026-04-13 00:26:47.071859 | orchestrator | changed: [testbed-manager]
2026-04-13 00:26:47.071870 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:26:47.071883 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:26:47.071900 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:26:47.071911 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:26:47.071922 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:26:47.071932 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:26:47.071943 | orchestrator |
2026-04-13 00:26:47.071954 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-13 00:26:47.071965 | orchestrator | Monday 13 April 2026 00:26:44 +0000 (0:00:01.598) 0:00:11.106 **********
2026-04-13 00:26:47.071976 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:26:47.071988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:26:47.072000 | orchestrator |
2026-04-13 00:26:47.072011 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-13 00:26:47.072039 | orchestrator | Monday 13 April 2026 00:26:44 +0000 (0:00:00.319) 0:00:11.426 **********
2026-04-13 00:26:47.072051 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:26:47.072062 |
orchestrator | changed: [testbed-node-0] 2026-04-13 00:26:47.072078 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:26:47.072089 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:26:47.072099 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:26:47.072110 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:26:47.072120 | orchestrator | 2026-04-13 00:26:47.072131 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-04-13 00:26:47.072142 | orchestrator | Monday 13 April 2026 00:26:45 +0000 (0:00:01.030) 0:00:12.456 ********** 2026-04-13 00:26:47.072153 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:26:47.072163 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:26:47.072174 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:26:47.072184 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:26:47.072195 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:26:47.072205 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:26:47.072225 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:26:47.072236 | orchestrator | 2026-04-13 00:26:47.072246 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-04-13 00:26:47.072257 | orchestrator | Monday 13 April 2026 00:26:46 +0000 (0:00:00.611) 0:00:13.068 ********** 2026-04-13 00:26:47.072273 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:26:47.072289 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:26:47.072300 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:26:47.072310 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:26:47.072323 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:26:47.072340 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:26:47.072351 | orchestrator | ok: [testbed-manager] 2026-04-13 00:26:47.072362 | orchestrator | 2026-04-13 00:26:47.072373 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-04-13 00:26:47.072385 | orchestrator | Monday 13 April 2026 00:26:46 +0000 (0:00:00.417) 0:00:13.485 ********** 2026-04-13 00:26:47.072395 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:26:47.072406 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:26:47.072452 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:26:59.297345 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:26:59.297511 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:26:59.297538 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:26:59.297557 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:26:59.297576 | orchestrator | 2026-04-13 00:26:59.297596 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-13 00:26:59.297616 | orchestrator | Monday 13 April 2026 00:26:47 +0000 (0:00:00.232) 0:00:13.718 ********** 2026-04-13 00:26:59.297638 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:26:59.297677 | orchestrator | 2026-04-13 00:26:59.297698 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-13 00:26:59.297718 | orchestrator | Monday 13 April 2026 00:26:47 +0000 (0:00:00.301) 0:00:14.020 ********** 2026-04-13 00:26:59.297737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:26:59.297757 | orchestrator | 2026-04-13 00:26:59.297778 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-04-13 
00:26:59.297797 | orchestrator | Monday 13 April 2026 00:26:47 +0000 (0:00:00.334) 0:00:14.355 ********** 2026-04-13 00:26:59.297818 | orchestrator | ok: [testbed-manager] 2026-04-13 00:26:59.297839 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:26:59.297862 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:26:59.297882 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:26:59.297904 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:26:59.297927 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:26:59.297948 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:59.297970 | orchestrator | 2026-04-13 00:26:59.297993 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-13 00:26:59.298088 | orchestrator | Monday 13 April 2026 00:26:49 +0000 (0:00:01.345) 0:00:15.700 ********** 2026-04-13 00:26:59.298113 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:26:59.298134 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:26:59.298154 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:26:59.298173 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:26:59.298193 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:26:59.298213 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:26:59.298231 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:26:59.298251 | orchestrator | 2026-04-13 00:26:59.298271 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-13 00:26:59.298319 | orchestrator | Monday 13 April 2026 00:26:49 +0000 (0:00:00.229) 0:00:15.930 ********** 2026-04-13 00:26:59.298339 | orchestrator | ok: [testbed-manager] 2026-04-13 00:26:59.298359 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:26:59.298380 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:26:59.298424 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:26:59.298446 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:26:59.298464 | orchestrator 
| ok: [testbed-node-5] 2026-04-13 00:26:59.298482 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:59.298502 | orchestrator | 2026-04-13 00:26:59.298522 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-13 00:26:59.298540 | orchestrator | Monday 13 April 2026 00:26:49 +0000 (0:00:00.529) 0:00:16.459 ********** 2026-04-13 00:26:59.298558 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:26:59.298576 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:26:59.298595 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:26:59.298613 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:26:59.298632 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:26:59.298651 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:26:59.298669 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:26:59.298689 | orchestrator | 2026-04-13 00:26:59.298701 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-13 00:26:59.298712 | orchestrator | Monday 13 April 2026 00:26:50 +0000 (0:00:00.248) 0:00:16.708 ********** 2026-04-13 00:26:59.298723 | orchestrator | ok: [testbed-manager] 2026-04-13 00:26:59.298745 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:26:59.298757 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:26:59.298767 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:26:59.298777 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:26:59.298788 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:26:59.298799 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:26:59.298809 | orchestrator | 2026-04-13 00:26:59.298820 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-13 00:26:59.298831 | orchestrator | Monday 13 April 2026 00:26:50 +0000 (0:00:00.651) 0:00:17.359 ********** 2026-04-13 00:26:59.298841 | orchestrator | ok: 
[testbed-manager] 2026-04-13 00:26:59.298852 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:26:59.298862 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:26:59.298873 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:26:59.298883 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:26:59.298894 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:26:59.298904 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:26:59.298915 | orchestrator | 2026-04-13 00:26:59.298926 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-13 00:26:59.298936 | orchestrator | Monday 13 April 2026 00:26:51 +0000 (0:00:01.144) 0:00:18.503 ********** 2026-04-13 00:26:59.298947 | orchestrator | ok: [testbed-manager] 2026-04-13 00:26:59.298958 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:26:59.298968 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:26:59.298979 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:26:59.298989 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:26:59.299000 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:26:59.299010 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:59.299021 | orchestrator | 2026-04-13 00:26:59.299032 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-13 00:26:59.299043 | orchestrator | Monday 13 April 2026 00:26:53 +0000 (0:00:01.041) 0:00:19.545 ********** 2026-04-13 00:26:59.299077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:26:59.299090 | orchestrator | 2026-04-13 00:26:59.299101 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-13 00:26:59.299124 | orchestrator | Monday 13 April 2026 
00:26:53 +0000 (0:00:00.337) 0:00:19.883 ********** 2026-04-13 00:26:59.299135 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:26:59.299145 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:26:59.299156 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:26:59.299166 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:26:59.299177 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:26:59.299187 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:26:59.299197 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:26:59.299208 | orchestrator | 2026-04-13 00:26:59.299219 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-13 00:26:59.299230 | orchestrator | Monday 13 April 2026 00:26:54 +0000 (0:00:01.407) 0:00:21.291 ********** 2026-04-13 00:26:59.299240 | orchestrator | ok: [testbed-manager] 2026-04-13 00:26:59.299251 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:26:59.299261 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:26:59.299272 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:26:59.299282 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:59.299292 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:26:59.299303 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:26:59.299313 | orchestrator | 2026-04-13 00:26:59.299324 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-13 00:26:59.299335 | orchestrator | Monday 13 April 2026 00:26:55 +0000 (0:00:00.264) 0:00:21.555 ********** 2026-04-13 00:26:59.299346 | orchestrator | ok: [testbed-manager] 2026-04-13 00:26:59.299356 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:26:59.299367 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:26:59.299377 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:26:59.299387 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:59.299452 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:26:59.299467 | 
orchestrator | ok: [testbed-node-5] 2026-04-13 00:26:59.299478 | orchestrator | 2026-04-13 00:26:59.299488 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-13 00:26:59.299499 | orchestrator | Monday 13 April 2026 00:26:55 +0000 (0:00:00.241) 0:00:21.796 ********** 2026-04-13 00:26:59.299510 | orchestrator | ok: [testbed-manager] 2026-04-13 00:26:59.299520 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:26:59.299530 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:26:59.299541 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:26:59.299551 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:59.299561 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:26:59.299572 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:26:59.299582 | orchestrator | 2026-04-13 00:26:59.299593 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-13 00:26:59.299603 | orchestrator | Monday 13 April 2026 00:26:55 +0000 (0:00:00.236) 0:00:22.033 ********** 2026-04-13 00:26:59.299615 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:26:59.299626 | orchestrator | 2026-04-13 00:26:59.299641 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-13 00:26:59.299660 | orchestrator | Monday 13 April 2026 00:26:55 +0000 (0:00:00.317) 0:00:22.350 ********** 2026-04-13 00:26:59.299679 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:26:59.299697 | orchestrator | ok: [testbed-manager] 2026-04-13 00:26:59.299714 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:26:59.299731 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:26:59.299748 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:59.299764 | orchestrator | ok: 
[testbed-node-5] 2026-04-13 00:26:59.299783 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:26:59.299800 | orchestrator | 2026-04-13 00:26:59.299818 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-13 00:26:59.299838 | orchestrator | Monday 13 April 2026 00:26:56 +0000 (0:00:00.627) 0:00:22.977 ********** 2026-04-13 00:26:59.299857 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:26:59.299890 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:26:59.299903 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:26:59.299914 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:26:59.299924 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:26:59.299935 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:26:59.299946 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:26:59.299956 | orchestrator | 2026-04-13 00:26:59.299966 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-13 00:26:59.299977 | orchestrator | Monday 13 April 2026 00:26:56 +0000 (0:00:00.250) 0:00:23.228 ********** 2026-04-13 00:26:59.299988 | orchestrator | ok: [testbed-manager] 2026-04-13 00:26:59.299998 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:26:59.300009 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:26:59.300019 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:59.300030 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:26:59.300040 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:26:59.300051 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:26:59.300061 | orchestrator | 2026-04-13 00:26:59.300072 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-13 00:26:59.300083 | orchestrator | Monday 13 April 2026 00:26:57 +0000 (0:00:01.055) 0:00:24.284 ********** 2026-04-13 00:26:59.300094 | orchestrator | ok: [testbed-manager] 2026-04-13 
00:26:59.300104 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:26:59.300115 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:26:59.300125 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:59.300135 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:26:59.300146 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:26:59.300156 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:26:59.300167 | orchestrator | 2026-04-13 00:26:59.300177 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-13 00:26:59.300188 | orchestrator | Monday 13 April 2026 00:26:58 +0000 (0:00:00.560) 0:00:24.844 ********** 2026-04-13 00:26:59.300199 | orchestrator | ok: [testbed-manager] 2026-04-13 00:26:59.300209 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:59.300220 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:26:59.300230 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:26:59.300253 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:27:42.412690 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:27:42.412800 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:27:42.412817 | orchestrator | 2026-04-13 00:27:42.412829 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-13 00:27:42.412842 | orchestrator | Monday 13 April 2026 00:26:59 +0000 (0:00:01.042) 0:00:25.886 ********** 2026-04-13 00:27:42.412853 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:27:42.412865 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:27:42.412875 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:27:42.412886 | orchestrator | changed: [testbed-manager] 2026-04-13 00:27:42.412898 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:27:42.412908 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:27:42.412919 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:27:42.412930 | orchestrator | 2026-04-13 00:27:42.412941 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-04-13 00:27:42.412953 | orchestrator | Monday 13 April 2026 00:27:16 +0000 (0:00:16.675) 0:00:42.562 ********** 2026-04-13 00:27:42.412964 | orchestrator | ok: [testbed-manager] 2026-04-13 00:27:42.412974 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:27:42.412985 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:27:42.412996 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:27:42.413007 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:27:42.413017 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:27:42.413028 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:27:42.413039 | orchestrator | 2026-04-13 00:27:42.413050 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-04-13 00:27:42.413061 | orchestrator | Monday 13 April 2026 00:27:16 +0000 (0:00:00.242) 0:00:42.805 ********** 2026-04-13 00:27:42.413095 | orchestrator | ok: [testbed-manager] 2026-04-13 00:27:42.413107 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:27:42.413117 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:27:42.413128 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:27:42.413139 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:27:42.413149 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:27:42.413160 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:27:42.413170 | orchestrator | 2026-04-13 00:27:42.413181 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-04-13 00:27:42.413192 | orchestrator | Monday 13 April 2026 00:27:16 +0000 (0:00:00.219) 0:00:43.024 ********** 2026-04-13 00:27:42.413203 | orchestrator | ok: [testbed-manager] 2026-04-13 00:27:42.413213 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:27:42.413224 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:27:42.413235 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:27:42.413248 | orchestrator | ok: 
[testbed-node-3] 2026-04-13 00:27:42.413260 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:27:42.413272 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:27:42.413284 | orchestrator | 2026-04-13 00:27:42.413297 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-04-13 00:27:42.413310 | orchestrator | Monday 13 April 2026 00:27:16 +0000 (0:00:00.232) 0:00:43.257 ********** 2026-04-13 00:27:42.413324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:27:42.413338 | orchestrator | 2026-04-13 00:27:42.413397 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-04-13 00:27:42.413411 | orchestrator | Monday 13 April 2026 00:27:17 +0000 (0:00:00.289) 0:00:43.546 ********** 2026-04-13 00:27:42.413424 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:27:42.413436 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:27:42.413448 | orchestrator | ok: [testbed-manager] 2026-04-13 00:27:42.413460 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:27:42.413472 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:27:42.413485 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:27:42.413497 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:27:42.413509 | orchestrator | 2026-04-13 00:27:42.413522 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-04-13 00:27:42.413535 | orchestrator | Monday 13 April 2026 00:27:18 +0000 (0:00:01.682) 0:00:45.229 ********** 2026-04-13 00:27:42.413548 | orchestrator | changed: [testbed-manager] 2026-04-13 00:27:42.413561 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:27:42.413572 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:27:42.413583 | orchestrator | 
changed: [testbed-node-2] 2026-04-13 00:27:42.413594 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:27:42.413610 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:27:42.413621 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:27:42.413633 | orchestrator | 2026-04-13 00:27:42.413643 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-04-13 00:27:42.413654 | orchestrator | Monday 13 April 2026 00:27:19 +0000 (0:00:01.172) 0:00:46.401 ********** 2026-04-13 00:27:42.413666 | orchestrator | ok: [testbed-manager] 2026-04-13 00:27:42.413677 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:27:42.413687 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:27:42.413698 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:27:42.413709 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:27:42.413720 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:27:42.413731 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:27:42.413741 | orchestrator | 2026-04-13 00:27:42.413752 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-04-13 00:27:42.413763 | orchestrator | Monday 13 April 2026 00:27:20 +0000 (0:00:00.807) 0:00:47.208 ********** 2026-04-13 00:27:42.413775 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:27:42.413796 | orchestrator | 2026-04-13 00:27:42.413807 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-04-13 00:27:42.413819 | orchestrator | Monday 13 April 2026 00:27:20 +0000 (0:00:00.316) 0:00:47.525 ********** 2026-04-13 00:27:42.413829 | orchestrator | changed: [testbed-manager] 2026-04-13 00:27:42.413841 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:27:42.413852 | 
orchestrator | changed: [testbed-node-1] 2026-04-13 00:27:42.413863 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:27:42.413874 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:27:42.413885 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:27:42.413895 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:27:42.413906 | orchestrator | 2026-04-13 00:27:42.413935 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2026-04-13 00:27:42.413947 | orchestrator | Monday 13 April 2026 00:27:22 +0000 (0:00:01.084) 0:00:48.609 ********** 2026-04-13 00:27:42.413958 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:27:42.413969 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:27:42.413979 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:27:42.413990 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:27:42.414001 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:27:42.414070 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:27:42.414084 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:27:42.414095 | orchestrator | 2026-04-13 00:27:42.414106 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-04-13 00:27:42.414117 | orchestrator | Monday 13 April 2026 00:27:22 +0000 (0:00:00.246) 0:00:48.856 ********** 2026-04-13 00:27:42.414128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:27:42.414139 | orchestrator | 2026-04-13 00:27:42.414150 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-04-13 00:27:42.414161 | orchestrator | Monday 13 April 2026 00:27:22 +0000 (0:00:00.290) 0:00:49.146 ********** 2026-04-13 00:27:42.414172 | orchestrator | ok: 
[testbed-manager]
2026-04-13 00:27:42.414183 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:42.414193 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:42.414204 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:42.414215 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:42.414226 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:42.414236 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:42.414247 | orchestrator |
2026-04-13 00:27:42.414258 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-04-13 00:27:42.414269 | orchestrator | Monday 13 April 2026 00:27:24 +0000 (0:00:01.718) 0:00:50.865 **********
2026-04-13 00:27:42.414280 | orchestrator | changed: [testbed-manager]
2026-04-13 00:27:42.414290 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:27:42.414301 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:27:42.414312 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:27:42.414322 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:27:42.414333 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:27:42.414344 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:27:42.414355 | orchestrator |
2026-04-13 00:27:42.414405 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-04-13 00:27:42.414416 | orchestrator | Monday 13 April 2026 00:27:25 +0000 (0:00:01.126) 0:00:51.991 **********
2026-04-13 00:27:42.414427 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:27:42.414438 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:27:42.414449 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:27:42.414460 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:27:42.414471 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:27:42.414481 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:27:42.414501 | orchestrator | changed: [testbed-manager]
2026-04-13 00:27:42.414512 | orchestrator |
2026-04-13 00:27:42.414523 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-04-13 00:27:42.414534 | orchestrator | Monday 13 April 2026 00:27:39 +0000 (0:00:13.566) 0:01:05.558 **********
2026-04-13 00:27:42.414545 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:42.414556 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:42.414567 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:42.414577 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:42.414588 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:42.414599 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:42.414609 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:42.414620 | orchestrator |
2026-04-13 00:27:42.414631 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-04-13 00:27:42.414642 | orchestrator | Monday 13 April 2026 00:27:40 +0000 (0:00:01.646) 0:01:07.204 **********
2026-04-13 00:27:42.414653 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:42.414664 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:42.414674 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:42.414685 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:42.414696 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:42.414707 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:42.414723 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:42.414734 | orchestrator |
2026-04-13 00:27:42.414745 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-04-13 00:27:42.414756 | orchestrator | Monday 13 April 2026 00:27:41 +0000 (0:00:00.938) 0:01:08.143 **********
2026-04-13 00:27:42.414767 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:42.414778 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:42.414789 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:42.414799 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:42.414810 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:42.414821 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:42.414832 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:42.414842 | orchestrator |
2026-04-13 00:27:42.414853 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-04-13 00:27:42.414864 | orchestrator | Monday 13 April 2026 00:27:41 +0000 (0:00:00.230) 0:01:08.374 **********
2026-04-13 00:27:42.414875 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:42.414886 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:42.414897 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:42.414907 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:42.414918 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:42.414929 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:42.414939 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:42.414950 | orchestrator |
2026-04-13 00:27:42.414961 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-04-13 00:27:42.414972 | orchestrator | Monday 13 April 2026 00:27:42 +0000 (0:00:00.324) 0:01:08.606 **********
2026-04-13 00:27:42.414983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:27:42.414995 | orchestrator |
2026-04-13 00:27:42.415014 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-04-13 00:30:05.593864 | orchestrator | Monday 13 April 2026 00:27:42 +0000 (0:00:00.324) 0:01:08.930 **********
2026-04-13 00:30:05.593977 | orchestrator | ok: [testbed-manager]
2026-04-13 00:30:05.593996 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:05.594009 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:05.594112 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:05.594134 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:05.594153 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:05.594171 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:05.594183 | orchestrator |
2026-04-13 00:30:05.594195 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-04-13 00:30:05.594232 | orchestrator | Monday 13 April 2026 00:27:44 +0000 (0:00:01.870) 0:01:10.801 **********
2026-04-13 00:30:05.594244 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:30:05.594342 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:30:05.594354 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:30:05.594364 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:30:05.594375 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:30:05.594389 | orchestrator | changed: [testbed-manager]
2026-04-13 00:30:05.594401 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:30:05.594415 | orchestrator |
2026-04-13 00:30:05.594429 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-04-13 00:30:05.594443 | orchestrator | Monday 13 April 2026 00:27:44 +0000 (0:00:00.562) 0:01:11.364 **********
2026-04-13 00:30:05.594456 | orchestrator | ok: [testbed-manager]
2026-04-13 00:30:05.594469 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:05.594481 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:05.594493 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:05.594505 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:05.594518 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:05.594531 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:05.594543 | orchestrator |
2026-04-13 00:30:05.594556 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-04-13 00:30:05.594568 | orchestrator | Monday 13 April 2026 00:27:45 +0000 (0:00:00.228) 0:01:11.592 **********
2026-04-13 00:30:05.594581 | orchestrator | ok: [testbed-manager]
2026-04-13 00:30:05.594593 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:05.594604 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:05.594615 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:05.594625 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:05.594636 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:05.594646 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:05.594657 | orchestrator |
2026-04-13 00:30:05.594670 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-04-13 00:30:05.594689 | orchestrator | Monday 13 April 2026 00:27:46 +0000 (0:00:01.232) 0:01:12.825 **********
2026-04-13 00:30:05.594750 | orchestrator | changed: [testbed-manager]
2026-04-13 00:30:05.594764 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:30:05.594775 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:30:05.594785 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:30:05.594796 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:30:05.594806 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:30:05.594817 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:30:05.594827 | orchestrator |
2026-04-13 00:30:05.594838 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-04-13 00:30:05.594849 | orchestrator | Monday 13 April 2026 00:27:48 +0000 (0:00:01.944) 0:01:14.770 **********
2026-04-13 00:30:05.594859 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:05.594870 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:05.594880 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:05.594892 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:05.594902 | orchestrator | ok: [testbed-manager]
2026-04-13 00:30:05.594913 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:05.594923 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:05.594934 | orchestrator |
2026-04-13 00:30:05.594944 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-04-13 00:30:05.594955 | orchestrator | Monday 13 April 2026 00:27:51 +0000 (0:00:02.886) 0:01:17.657 **********
2026-04-13 00:30:05.594966 | orchestrator | ok: [testbed-manager]
2026-04-13 00:30:05.594976 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:05.594986 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:05.594997 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:05.595007 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:05.595017 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:05.595028 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:05.595048 | orchestrator |
2026-04-13 00:30:05.595059 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-04-13 00:30:05.595084 | orchestrator | Monday 13 April 2026 00:28:30 +0000 (0:00:39.462) 0:01:57.119 **********
2026-04-13 00:30:05.595095 | orchestrator | changed: [testbed-manager]
2026-04-13 00:30:05.595105 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:30:05.595116 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:30:05.595127 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:30:05.595138 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:30:05.595148 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:30:05.595159 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:30:05.595170 | orchestrator |
2026-04-13 00:30:05.595180 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-04-13 00:30:05.595191 | orchestrator | Monday 13 April 2026 00:29:49 +0000 (0:01:18.831) 0:03:15.951 **********
2026-04-13 00:30:05.595202 | orchestrator | ok: [testbed-manager]
2026-04-13 00:30:05.595213 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:05.595224 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:05.595234 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:05.595270 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:05.595301 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:05.595320 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:05.595338 | orchestrator |
2026-04-13 00:30:05.595356 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-04-13 00:30:05.595373 | orchestrator | Monday 13 April 2026 00:29:51 +0000 (0:00:01.790) 0:03:17.741 **********
2026-04-13 00:30:05.595388 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:05.595405 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:05.595424 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:05.595441 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:05.595460 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:05.595478 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:05.595496 | orchestrator | changed: [testbed-manager]
2026-04-13 00:30:05.595515 | orchestrator |
2026-04-13 00:30:05.595526 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-04-13 00:30:05.595537 | orchestrator | Monday 13 April 2026 00:30:04 +0000 (0:00:13.249) 0:03:30.991 **********
2026-04-13 00:30:05.595583 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-04-13 00:30:05.595608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-04-13 00:30:05.595624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-04-13 00:30:05.595637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-13 00:30:05.595659 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-13 00:30:05.595675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-04-13 00:30:05.595686 | orchestrator |
2026-04-13 00:30:05.595697 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-04-13 00:30:05.595708 | orchestrator | Monday 13 April 2026 00:30:04 +0000 (0:00:00.402) 0:03:31.393 **********
2026-04-13 00:30:05.595719 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-13 00:30:05.595730 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:30:05.595741 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-13 00:30:05.595752 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-13 00:30:05.595762 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:30:05.595773 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:30:05.595791 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-13 00:30:05.595803 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:30:05.595814 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-13 00:30:05.595825 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-13 00:30:05.595836 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-13 00:30:05.595847 | orchestrator |
2026-04-13 00:30:05.595857 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-04-13 00:30:05.595868 | orchestrator | Monday 13 April 2026 00:30:05 +0000 (0:00:00.647) 0:03:32.041 **********
2026-04-13 00:30:05.595879 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-13 00:30:05.595891 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-13 00:30:05.595902 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-13 00:30:05.595913 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-13 00:30:05.595923 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-13 00:30:05.595941 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-13 00:30:10.206608 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-13 00:30:10.206674 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-13 00:30:10.206686 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-13 00:30:10.206696 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-13 00:30:10.206707 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:30:10.206718 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-13 00:30:10.206727 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-13 00:30:10.206737 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-13 00:30:10.206766 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-13 00:30:10.206775 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-13 00:30:10.206784 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-13 00:30:10.206794 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-13 00:30:10.206803 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-13 00:30:10.206812 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-13 00:30:10.206821 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-13 00:30:10.206830 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-13 00:30:10.206838 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-13 00:30:10.206848 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-13 00:30:10.206857 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-13 00:30:10.206866 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:30:10.206875 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-13 00:30:10.206883 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-13 00:30:10.206893 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-13 00:30:10.206902 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-13 00:30:10.206910 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-13 00:30:10.206919 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-13 00:30:10.206928 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:30:10.206937 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-13 00:30:10.206946 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-13 00:30:10.206964 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-13 00:30:10.206974 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-13 00:30:10.206983 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-13 00:30:10.206993 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-13 00:30:10.207003 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-13 00:30:10.207009 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-13 00:30:10.207014 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-13 00:30:10.207020 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-13 00:30:10.207025 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:30:10.207030 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-13 00:30:10.207036 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-13 00:30:10.207041 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-13 00:30:10.207051 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-13 00:30:10.207056 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-13 00:30:10.207073 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-13 00:30:10.207078 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-13 00:30:10.207084 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-13 00:30:10.207089 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-13 00:30:10.207094 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-13 00:30:10.207100 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-13 00:30:10.207105 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-13 00:30:10.207110 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-13 00:30:10.207116 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-13 00:30:10.207121 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-13 00:30:10.207126 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-13 00:30:10.207131 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-13 00:30:10.207137 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-13 00:30:10.207142 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-13 00:30:10.207148 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-13 00:30:10.207153 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-13 00:30:10.207158 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-13 00:30:10.207163 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-13 00:30:10.207169 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-13 00:30:10.207174 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-13 00:30:10.207179 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-13 00:30:10.207185 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-13 00:30:10.207190 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-13 00:30:10.207195 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-13 00:30:10.207201 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-13 00:30:10.207206 | orchestrator |
2026-04-13 00:30:10.207212 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-13 00:30:10.207218 | orchestrator | Monday 13 April 2026 00:30:09 +0000 (0:00:03.660) 0:03:35.701 **********
2026-04-13 00:30:10.207223 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-13 00:30:10.207228 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-13 00:30:10.207236 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-13 00:30:10.207263 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-13 00:30:10.207277 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-13 00:30:10.207286 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-13 00:30:10.207294 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-13 00:30:10.207299 | orchestrator |
2026-04-13 00:30:10.207305 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-13 00:30:10.207310 | orchestrator | Monday 13 April 2026 00:30:09 +0000 (0:00:00.530) 0:03:36.232 **********
2026-04-13 00:30:10.207315 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:10.207321 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:30:10.207326 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:10.207331 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:10.207337 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:30:10.207342 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:30:10.207347 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:10.207352 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:30:10.207358 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:10.207363 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:10.207372 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:24.128207 | orchestrator |
2026-04-13 00:30:24.128376 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-13 00:30:24.128393 | orchestrator | Monday 13 April 2026 00:30:10 +0000 (0:00:00.532) 0:03:36.764 **********
2026-04-13 00:30:24.128405 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:24.128417 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:30:24.128430 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:24.128441 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:30:24.128452 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:24.128463 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:30:24.128473 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:24.128484 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:30:24.128495 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:24.128506 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:24.128517 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:24.128528 | orchestrator |
2026-04-13 00:30:24.128539 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-13 00:30:24.128550 | orchestrator | Monday 13 April 2026 00:30:10 +0000 (0:00:00.519) 0:03:37.284 **********
2026-04-13 00:30:24.128561 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-13 00:30:24.128572 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-13 00:30:24.128594 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:30:24.128606 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-13 00:30:24.128641 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:30:24.128653 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:30:24.128664 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-13 00:30:24.128674 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:30:24.128685 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-13 00:30:24.128696 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-13 00:30:24.128706 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-13 00:30:24.128717 | orchestrator |
2026-04-13 00:30:24.128728 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-04-13 00:30:24.128741 | orchestrator | Monday 13 April 2026 00:30:12 +0000 (0:00:01.615) 0:03:38.899 **********
2026-04-13 00:30:24.128754 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:30:24.128767 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:30:24.128779 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:30:24.128792 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:30:24.128804 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:30:24.128817 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:30:24.128829 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:30:24.128842 | orchestrator |
2026-04-13 00:30:24.128854 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-04-13 00:30:24.128867 | orchestrator | Monday 13 April 2026 00:30:12 +0000 (0:00:00.298) 0:03:39.197 **********
2026-04-13 00:30:24.128880 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:24.128893 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:24.128905 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:24.128917 | orchestrator | ok: [testbed-manager]
2026-04-13 00:30:24.128930 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:24.128942 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:24.128954 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:24.128966 | orchestrator |
2026-04-13 00:30:24.128979 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-13 00:30:24.128991 | orchestrator | Monday 13 April 2026 00:30:18 +0000 (0:00:05.881) 0:03:45.079 **********
2026-04-13 00:30:24.129002 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-13 00:30:24.129013 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-13 00:30:24.129024 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:30:24.129035 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:30:24.129046 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-13 00:30:24.129056 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-13 00:30:24.129067 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:30:24.129078 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:30:24.129088 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-13 00:30:24.129099 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-13 00:30:24.129110 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:30:24.129121 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:30:24.129131 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-13 00:30:24.129142 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:30:24.129153 | orchestrator |
2026-04-13 00:30:24.129164 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-13 00:30:24.129174 | orchestrator | Monday 13 April 2026 00:30:18 +0000 (0:00:00.333) 0:03:45.412 **********
2026-04-13 00:30:24.129185 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-13 00:30:24.129196 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-13 00:30:24.129207 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-13 00:30:24.129258 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-13 00:30:24.129271 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-13 00:30:24.129282 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-13 00:30:24.129301 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-13 00:30:24.129312 | orchestrator |
2026-04-13 00:30:24.129323 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-13 00:30:24.129334 | orchestrator | Monday 13 April 2026 00:30:19 +0000 (0:00:01.067) 0:03:46.480 **********
2026-04-13 00:30:24.129347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:30:24.129360 | orchestrator |
2026-04-13 00:30:24.129371 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-13 00:30:24.129382 | orchestrator | Monday 13 April 2026 00:30:20 +0000 (0:00:00.451) 0:03:46.932 **********
2026-04-13 00:30:24.129393 | orchestrator | ok: [testbed-manager]
2026-04-13 00:30:24.129403 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:24.129414 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:24.129425 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:24.129436 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:24.129447 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:24.129457 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:24.129468 | orchestrator |
2026-04-13 00:30:24.129479 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-13 00:30:24.129490 | orchestrator | Monday 13 April 2026 00:30:21 +0000 (0:00:01.305) 0:03:48.238 **********
2026-04-13 00:30:24.129501 | orchestrator | ok: [testbed-manager]
2026-04-13 00:30:24.129511 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:24.129522 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:24.129532 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:24.129543 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:24.129553 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:24.129583 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:24.129594 | orchestrator |
2026-04-13 00:30:24.129605 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-13 00:30:24.129616 | orchestrator | Monday 13 April 2026 00:30:22 +0000 (0:00:00.627) 0:03:48.865 **********
2026-04-13 00:30:24.129627 | orchestrator | changed: [testbed-manager]
2026-04-13 00:30:24.129638 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:30:24.129649 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:30:24.129660 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:30:24.129670 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:30:24.129681 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:30:24.129691 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:30:24.129702 | orchestrator |
2026-04-13 00:30:24.129713 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-13 00:30:24.129724 | orchestrator | Monday 13 April 2026 00:30:22 +0000 (0:00:00.633) 0:03:49.499 **********
2026-04-13 00:30:24.129735 | orchestrator | ok: [testbed-manager]
2026-04-13 00:30:24.129745 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:24.129756 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:24.129767 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:24.129778 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:24.129788 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:24.129799 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:24.129810 | orchestrator |
2026-04-13 00:30:24.129821 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-04-13 00:30:24.129831 | orchestrator | Monday 13 April 2026 00:30:23 +0000 (0:00:00.616) 0:03:50.116 **********
2026-04-13 00:30:24.129851 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776038724.081166, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:24.129873 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776038750.2170107, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:24.129886 | orchestrator |
changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776038721.562736, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 00:30:24.129920 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776038734.0016155, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 00:30:29.750340 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776038753.3117673, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 00:30:29.750456 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776038751.2156918, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 00:30:29.750473 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776038752.3853877, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 00:30:29.750503 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 00:30:29.750535 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 00:30:29.750547 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 00:30:29.750558 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 00:30:29.750596 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 00:30:29.750609 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 00:30:29.750620 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 00:30:29.750632 | orchestrator | 2026-04-13 00:30:29.750645 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-04-13 00:30:29.750665 | orchestrator | Monday 13 April 2026 00:30:24 +0000 (0:00:00.986) 0:03:51.102 ********** 2026-04-13 00:30:29.750684 | orchestrator | changed: [testbed-manager] 2026-04-13 00:30:29.750703 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:30:29.750734 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:30:29.750754 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:30:29.750772 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:30:29.750790 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:30:29.750809 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:30:29.750821 | orchestrator | 2026-04-13 00:30:29.750835 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-04-13 00:30:29.750847 | orchestrator | Monday 13 April 2026 00:30:25 +0000 (0:00:01.172) 0:03:52.275 ********** 2026-04-13 00:30:29.750859 | orchestrator | changed: [testbed-manager] 2026-04-13 00:30:29.750872 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:30:29.750890 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:30:29.750903 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:30:29.750915 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:30:29.750928 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:30:29.750940 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:30:29.750952 | orchestrator | 2026-04-13 00:30:29.750965 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-04-13 00:30:29.750977 | orchestrator | Monday 13 April 2026 00:30:26 +0000 (0:00:01.172) 0:03:53.447 ********** 2026-04-13 00:30:29.750990 | orchestrator | changed: [testbed-manager] 2026-04-13 00:30:29.751002 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:30:29.751014 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:30:29.751024 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:30:29.751035 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:30:29.751045 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:30:29.751056 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:30:29.751066 | orchestrator | 2026-04-13 00:30:29.751077 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-04-13 00:30:29.751089 | orchestrator | Monday 13 April 2026 00:30:28 +0000 (0:00:01.315) 0:03:54.763 ********** 2026-04-13 00:30:29.751100 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:30:29.751110 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:30:29.751121 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:30:29.751131 | orchestrator | skipping: [testbed-node-2] 
2026-04-13 00:30:29.751142 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:30:29.751153 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:30:29.751163 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:30:29.751174 | orchestrator | 2026-04-13 00:30:29.751185 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-04-13 00:30:29.751195 | orchestrator | Monday 13 April 2026 00:30:28 +0000 (0:00:00.315) 0:03:55.078 ********** 2026-04-13 00:30:29.751206 | orchestrator | ok: [testbed-manager] 2026-04-13 00:30:29.751218 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:30:29.751254 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:30:29.751267 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:30:29.751278 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:30:29.751289 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:30:29.751299 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:30:29.751310 | orchestrator | 2026-04-13 00:30:29.751321 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-04-13 00:30:29.751331 | orchestrator | Monday 13 April 2026 00:30:29 +0000 (0:00:00.776) 0:03:55.855 ********** 2026-04-13 00:30:29.751345 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:30:29.751358 | orchestrator | 2026-04-13 00:30:29.751369 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-04-13 00:30:29.751388 | orchestrator | Monday 13 April 2026 00:30:29 +0000 (0:00:00.416) 0:03:56.271 ********** 2026-04-13 00:31:49.581649 | orchestrator | ok: [testbed-manager] 2026-04-13 00:31:49.581750 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:31:49.581765 | orchestrator | changed: 
[testbed-node-4] 2026-04-13 00:31:49.581797 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:31:49.581808 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:31:49.581818 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:31:49.581827 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:31:49.581838 | orchestrator | 2026-04-13 00:31:49.581849 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-04-13 00:31:49.581860 | orchestrator | Monday 13 April 2026 00:30:38 +0000 (0:00:08.500) 0:04:04.772 ********** 2026-04-13 00:31:49.581869 | orchestrator | ok: [testbed-manager] 2026-04-13 00:31:49.581879 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:31:49.581889 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:31:49.581898 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:31:49.581908 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:31:49.581917 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:31:49.581926 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:31:49.581936 | orchestrator | 2026-04-13 00:31:49.581945 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-04-13 00:31:49.581955 | orchestrator | Monday 13 April 2026 00:30:39 +0000 (0:00:01.255) 0:04:06.027 ********** 2026-04-13 00:31:49.581965 | orchestrator | ok: [testbed-manager] 2026-04-13 00:31:49.581974 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:31:49.581984 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:31:49.581993 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:31:49.582003 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:31:49.582012 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:31:49.582088 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:31:49.582110 | orchestrator | 2026-04-13 00:31:49.582120 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-04-13 00:31:49.582130 | orchestrator | 
Monday 13 April 2026 00:30:40 +0000 (0:00:01.033) 0:04:07.060 ********** 2026-04-13 00:31:49.582139 | orchestrator | ok: [testbed-manager] 2026-04-13 00:31:49.582149 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:31:49.582158 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:31:49.582168 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:31:49.582196 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:31:49.582207 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:31:49.582217 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:31:49.582228 | orchestrator | 2026-04-13 00:31:49.582239 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-04-13 00:31:49.582250 | orchestrator | Monday 13 April 2026 00:30:40 +0000 (0:00:00.328) 0:04:07.389 ********** 2026-04-13 00:31:49.582261 | orchestrator | ok: [testbed-manager] 2026-04-13 00:31:49.582272 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:31:49.582283 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:31:49.582293 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:31:49.582304 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:31:49.582314 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:31:49.582323 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:31:49.582333 | orchestrator | 2026-04-13 00:31:49.582342 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-04-13 00:31:49.582352 | orchestrator | Monday 13 April 2026 00:30:41 +0000 (0:00:00.292) 0:04:07.682 ********** 2026-04-13 00:31:49.582361 | orchestrator | ok: [testbed-manager] 2026-04-13 00:31:49.582371 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:31:49.582380 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:31:49.582389 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:31:49.582399 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:31:49.582408 | orchestrator | ok: [testbed-node-4] 2026-04-13 
00:31:49.582419 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:31:49.582434 | orchestrator | 2026-04-13 00:31:49.582451 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-04-13 00:31:49.582466 | orchestrator | Monday 13 April 2026 00:30:41 +0000 (0:00:00.307) 0:04:07.989 ********** 2026-04-13 00:31:49.582479 | orchestrator | ok: [testbed-manager] 2026-04-13 00:31:49.582489 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:31:49.582498 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:31:49.582517 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:31:49.582526 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:31:49.582536 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:31:49.582545 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:31:49.582554 | orchestrator | 2026-04-13 00:31:49.582564 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-04-13 00:31:49.582574 | orchestrator | Monday 13 April 2026 00:30:47 +0000 (0:00:05.636) 0:04:13.626 ********** 2026-04-13 00:31:49.582585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:31:49.582598 | orchestrator | 2026-04-13 00:31:49.582607 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-04-13 00:31:49.582617 | orchestrator | Monday 13 April 2026 00:30:47 +0000 (0:00:00.421) 0:04:14.048 ********** 2026-04-13 00:31:49.582626 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-04-13 00:31:49.582636 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-04-13 00:31:49.582647 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:31:49.582656 | orchestrator | skipping: [testbed-node-0] => 
(item=apt-daily-upgrade)  2026-04-13 00:31:49.582666 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-04-13 00:31:49.582675 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-04-13 00:31:49.582685 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-04-13 00:31:49.582694 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:31:49.582704 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-04-13 00:31:49.582713 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-04-13 00:31:49.582723 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:31:49.582732 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-04-13 00:31:49.582742 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:31:49.582751 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-04-13 00:31:49.582761 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-04-13 00:31:49.582770 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-04-13 00:31:49.582795 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:31:49.582806 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:31:49.582816 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-04-13 00:31:49.582825 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-04-13 00:31:49.582835 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:31:49.582844 | orchestrator | 2026-04-13 00:31:49.582853 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-04-13 00:31:49.582863 | orchestrator | Monday 13 April 2026 00:30:47 +0000 (0:00:00.325) 0:04:14.374 ********** 2026-04-13 00:31:49.582873 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:31:49.582882 | orchestrator | 2026-04-13 00:31:49.582892 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-04-13 00:31:49.582901 | orchestrator | Monday 13 April 2026 00:30:48 +0000 (0:00:00.535) 0:04:14.909 ********** 2026-04-13 00:31:49.582910 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-04-13 00:31:49.582920 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-04-13 00:31:49.582930 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:31:49.582939 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-04-13 00:31:49.582964 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:31:49.582974 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-04-13 00:31:49.582990 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:31:49.582999 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:31:49.583009 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-04-13 00:31:49.583018 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-04-13 00:31:49.583027 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:31:49.583036 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:31:49.583046 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-04-13 00:31:49.583055 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:31:49.583065 | orchestrator | 2026-04-13 00:31:49.583074 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-04-13 00:31:49.583083 | orchestrator | Monday 13 April 2026 00:30:48 +0000 (0:00:00.303) 0:04:15.212 ********** 2026-04-13 00:31:49.583093 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:31:49.583103 | orchestrator | 2026-04-13 00:31:49.583112 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-04-13 00:31:49.583126 | orchestrator | Monday 13 April 2026 00:30:49 +0000 (0:00:00.419) 0:04:15.632 ********** 2026-04-13 00:31:49.583135 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:31:49.583145 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:31:49.583154 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:31:49.583164 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:31:49.583201 | orchestrator | changed: [testbed-manager] 2026-04-13 00:31:49.583211 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:31:49.583220 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:31:49.583230 | orchestrator | 2026-04-13 00:31:49.583239 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-04-13 00:31:49.583249 | orchestrator | Monday 13 April 2026 00:31:25 +0000 (0:00:36.065) 0:04:51.698 ********** 2026-04-13 00:31:49.583259 | orchestrator | changed: [testbed-manager] 2026-04-13 00:31:49.583268 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:31:49.583277 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:31:49.583287 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:31:49.583296 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:31:49.583306 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:31:49.583315 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:31:49.583324 | orchestrator | 2026-04-13 00:31:49.583334 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-04-13 00:31:49.583343 | orchestrator | 
Monday 13 April 2026 00:31:33 +0000 (0:00:08.354) 0:05:00.052 ********** 2026-04-13 00:31:49.583353 | orchestrator | changed: [testbed-manager] 2026-04-13 00:31:49.583362 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:31:49.583372 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:31:49.583381 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:31:49.583391 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:31:49.583400 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:31:49.583409 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:31:49.583419 | orchestrator | 2026-04-13 00:31:49.583428 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-04-13 00:31:49.583438 | orchestrator | Monday 13 April 2026 00:31:41 +0000 (0:00:08.043) 0:05:08.096 ********** 2026-04-13 00:31:49.583447 | orchestrator | ok: [testbed-manager] 2026-04-13 00:31:49.583457 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:31:49.583466 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:31:49.583476 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:31:49.583485 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:31:49.583494 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:31:49.583504 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:31:49.583513 | orchestrator | 2026-04-13 00:31:49.583523 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-04-13 00:31:49.583539 | orchestrator | Monday 13 April 2026 00:31:43 +0000 (0:00:01.806) 0:05:09.903 ********** 2026-04-13 00:31:49.583548 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:31:49.583558 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:31:49.583567 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:31:49.583577 | orchestrator | changed: [testbed-manager] 2026-04-13 00:31:49.583586 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:31:49.583596 | orchestrator | changed: 
[testbed-node-3]
2026-04-13 00:31:49.583605 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:31:49.583614 | orchestrator |
2026-04-13 00:31:49.583630 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-13 00:32:00.976739 | orchestrator | Monday 13 April 2026 00:31:49 +0000 (0:00:06.198) 0:05:16.101 **********
2026-04-13 00:32:00.976840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:32:00.976860 | orchestrator |
2026-04-13 00:32:00.976879 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-13 00:32:00.976896 | orchestrator | Monday 13 April 2026 00:31:50 +0000 (0:00:00.430) 0:05:16.531 **********
2026-04-13 00:32:00.976914 | orchestrator | changed: [testbed-manager]
2026-04-13 00:32:00.976933 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:32:00.976952 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:32:00.976973 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:32:00.976994 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:32:00.977015 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:32:00.977028 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:32:00.977039 | orchestrator |
2026-04-13 00:32:00.977050 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-13 00:32:00.977061 | orchestrator | Monday 13 April 2026 00:31:50 +0000 (0:00:00.738) 0:05:17.269 **********
2026-04-13 00:32:00.977072 | orchestrator | ok: [testbed-manager]
2026-04-13 00:32:00.977084 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:32:00.977095 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:32:00.977106 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:32:00.977116 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:32:00.977127 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:32:00.977138 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:32:00.977149 | orchestrator |
2026-04-13 00:32:00.977226 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-13 00:32:00.977241 | orchestrator | Monday 13 April 2026 00:31:52 +0000 (0:00:01.680) 0:05:18.950 **********
2026-04-13 00:32:00.977253 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:32:00.977264 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:32:00.977275 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:32:00.977288 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:32:00.977301 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:32:00.977314 | orchestrator | changed: [testbed-manager]
2026-04-13 00:32:00.977327 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:32:00.977340 | orchestrator |
2026-04-13 00:32:00.977353 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-13 00:32:00.977366 | orchestrator | Monday 13 April 2026 00:31:53 +0000 (0:00:00.807) 0:05:19.757 **********
2026-04-13 00:32:00.977379 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:32:00.977391 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:32:00.977404 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:32:00.977417 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:32:00.977429 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:32:00.977441 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:32:00.977455 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:32:00.977466 | orchestrator |
2026-04-13 00:32:00.977477 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-13 00:32:00.977526 | orchestrator | Monday 13 April 2026 00:31:53 +0000 (0:00:00.255) 0:05:20.013 **********
2026-04-13 00:32:00.977538 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:32:00.977549 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:32:00.977560 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:32:00.977571 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:32:00.977582 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:32:00.977592 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:32:00.977603 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:32:00.977614 | orchestrator |
2026-04-13 00:32:00.977633 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-13 00:32:00.977653 | orchestrator | Monday 13 April 2026 00:31:53 +0000 (0:00:00.388) 0:05:20.402 **********
2026-04-13 00:32:00.977672 | orchestrator | ok: [testbed-manager]
2026-04-13 00:32:00.977692 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:32:00.977711 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:32:00.977730 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:32:00.977748 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:32:00.977768 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:32:00.977789 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:32:00.977808 | orchestrator |
2026-04-13 00:32:00.977827 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-13 00:32:00.977839 | orchestrator | Monday 13 April 2026 00:31:54 +0000 (0:00:00.430) 0:05:20.832 **********
2026-04-13 00:32:00.977852 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:32:00.977871 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:32:00.977889 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:32:00.977907 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:32:00.977926 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:32:00.977945 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:32:00.977963 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:32:00.977981 | orchestrator |
2026-04-13 00:32:00.977999 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-04-13 00:32:00.978093 | orchestrator | Monday 13 April 2026 00:31:54 +0000 (0:00:00.259) 0:05:21.092 **********
2026-04-13 00:32:00.978117 | orchestrator | ok: [testbed-manager]
2026-04-13 00:32:00.978137 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:32:00.978156 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:32:00.978197 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:32:00.978209 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:32:00.978219 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:32:00.978230 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:32:00.978241 | orchestrator |
2026-04-13 00:32:00.978252 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-04-13 00:32:00.978263 | orchestrator | Monday 13 April 2026 00:31:54 +0000 (0:00:00.338) 0:05:21.430 **********
2026-04-13 00:32:00.978274 | orchestrator | ok: [testbed-manager] =>
2026-04-13 00:32:00.978285 | orchestrator |   docker_version: 5:27.5.1
2026-04-13 00:32:00.978296 | orchestrator | ok: [testbed-node-0] =>
2026-04-13 00:32:00.978307 | orchestrator |   docker_version: 5:27.5.1
2026-04-13 00:32:00.978318 | orchestrator | ok: [testbed-node-1] =>
2026-04-13 00:32:00.978329 | orchestrator |   docker_version: 5:27.5.1
2026-04-13 00:32:00.978339 | orchestrator | ok: [testbed-node-2] =>
2026-04-13 00:32:00.978350 | orchestrator |   docker_version: 5:27.5.1
2026-04-13 00:32:00.978382 | orchestrator | ok: [testbed-node-3] =>
2026-04-13 00:32:00.978394 | orchestrator |   docker_version: 5:27.5.1
2026-04-13 00:32:00.978405 | orchestrator | ok: [testbed-node-4] =>
2026-04-13 00:32:00.978415 | orchestrator |   docker_version: 5:27.5.1
2026-04-13 00:32:00.978426 | orchestrator | ok: [testbed-node-5] =>
2026-04-13 00:32:00.978437 | orchestrator |   docker_version: 5:27.5.1
2026-04-13 00:32:00.978447 | orchestrator |
2026-04-13 00:32:00.978458 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-04-13 00:32:00.978469 | orchestrator | Monday 13 April 2026 00:31:55 +0000 (0:00:00.308) 0:05:21.738 **********
2026-04-13 00:32:00.978493 | orchestrator | ok: [testbed-manager] =>
2026-04-13 00:32:00.978505 | orchestrator |   docker_cli_version: 5:27.5.1
2026-04-13 00:32:00.978515 | orchestrator | ok: [testbed-node-0] =>
2026-04-13 00:32:00.978526 | orchestrator |   docker_cli_version: 5:27.5.1
2026-04-13 00:32:00.978537 | orchestrator | ok: [testbed-node-1] =>
2026-04-13 00:32:00.978547 | orchestrator |   docker_cli_version: 5:27.5.1
2026-04-13 00:32:00.978558 | orchestrator | ok: [testbed-node-2] =>
2026-04-13 00:32:00.978569 | orchestrator |   docker_cli_version: 5:27.5.1
2026-04-13 00:32:00.978579 | orchestrator | ok: [testbed-node-3] =>
2026-04-13 00:32:00.978590 | orchestrator |   docker_cli_version: 5:27.5.1
2026-04-13 00:32:00.978601 | orchestrator | ok: [testbed-node-4] =>
2026-04-13 00:32:00.978611 | orchestrator |   docker_cli_version: 5:27.5.1
2026-04-13 00:32:00.978622 | orchestrator | ok: [testbed-node-5] =>
2026-04-13 00:32:00.978633 | orchestrator |   docker_cli_version: 5:27.5.1
2026-04-13 00:32:00.978644 | orchestrator |
2026-04-13 00:32:00.978655 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-04-13 00:32:00.978666 | orchestrator | Monday 13 April 2026 00:31:55 +0000 (0:00:00.290) 0:05:22.029 **********
2026-04-13 00:32:00.978677 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:32:00.978688 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:32:00.978698 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:32:00.978709 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:32:00.978719 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:32:00.978730 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:32:00.978741 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:32:00.978752 | orchestrator |
2026-04-13 00:32:00.978763 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-04-13 00:32:00.978774 | orchestrator | Monday 13 April 2026 00:31:55 +0000 (0:00:00.302) 0:05:22.332 **********
2026-04-13 00:32:00.978785 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:32:00.978795 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:32:00.978806 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:32:00.978817 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:32:00.978828 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:32:00.978838 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:32:00.978850 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:32:00.978869 | orchestrator |
2026-04-13 00:32:00.978887 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-04-13 00:32:00.978905 | orchestrator | Monday 13 April 2026 00:31:56 +0000 (0:00:00.283) 0:05:22.615 **********
2026-04-13 00:32:00.978950 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:32:00.978972 | orchestrator |
2026-04-13 00:32:00.978992 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-04-13 00:32:00.979010 | orchestrator | Monday 13 April 2026 00:31:56 +0000 (0:00:00.482) 0:05:23.097 **********
2026-04-13 00:32:00.979024 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:32:00.979035 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:32:00.979046 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:32:00.979057 | orchestrator | ok: [testbed-manager]
2026-04-13 00:32:00.979082 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:32:00.979105 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:32:00.979117 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:32:00.979128 | orchestrator |
2026-04-13 00:32:00.979138 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-04-13 00:32:00.979149 | orchestrator | Monday 13 April 2026 00:31:57 +0000 (0:00:00.832) 0:05:23.929 **********
2026-04-13 00:32:00.979181 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:32:00.979197 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:32:00.979208 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:32:00.979219 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:32:00.979238 | orchestrator | ok: [testbed-manager]
2026-04-13 00:32:00.979249 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:32:00.979260 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:32:00.979271 | orchestrator |
2026-04-13 00:32:00.979282 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-04-13 00:32:00.979294 | orchestrator | Monday 13 April 2026 00:32:00 +0000 (0:00:03.146) 0:05:27.075 **********
2026-04-13 00:32:00.979305 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-04-13 00:32:00.979316 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-04-13 00:32:00.979327 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-04-13 00:32:00.979338 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:32:00.979349 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-04-13 00:32:00.979359 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-04-13 00:32:00.979370 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-04-13 00:32:00.979381 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:32:00.979392 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-04-13 00:32:00.979403 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-04-13 00:32:00.979413 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-04-13 00:32:00.979424 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:32:00.979442 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-04-13 00:32:00.979459 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-04-13 00:32:00.979478 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-04-13 00:32:00.979497 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-04-13 00:32:00.979528 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-04-13 00:33:04.560281 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-04-13 00:33:04.560392 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:33:04.560408 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-04-13 00:33:04.560421 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-04-13 00:33:04.560432 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-04-13 00:33:04.560443 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:33:04.560454 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:33:04.560465 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-04-13 00:33:04.560476 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-04-13 00:33:04.560487 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-04-13 00:33:04.560497 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:33:04.560509 | orchestrator |
2026-04-13 00:33:04.560520 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-04-13 00:33:04.560533 | orchestrator | Monday 13 April 2026 00:32:01 +0000 (0:00:00.669) 0:05:27.744 **********
2026-04-13 00:33:04.560543 | orchestrator | ok: [testbed-manager]
2026-04-13 00:33:04.560555 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:04.560566 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:04.560576 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:04.560587 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:04.560597 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:04.560608 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:04.560618 | orchestrator |
2026-04-13 00:33:04.560629 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-04-13 00:33:04.560640 | orchestrator | Monday 13 April 2026 00:32:07 +0000 (0:00:06.527) 0:05:34.272 **********
2026-04-13 00:33:04.560651 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:04.560661 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:04.560672 | orchestrator | ok: [testbed-manager]
2026-04-13 00:33:04.560683 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:04.560693 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:04.560727 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:04.560740 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:04.560752 | orchestrator |
2026-04-13 00:33:04.560766 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-04-13 00:33:04.560778 | orchestrator | Monday 13 April 2026 00:32:08 +0000 (0:00:01.066) 0:05:35.338 **********
2026-04-13 00:33:04.560791 | orchestrator | ok: [testbed-manager]
2026-04-13 00:33:04.560803 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:04.560816 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:04.560829 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:04.560840 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:04.560851 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:04.560861 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:04.560872 | orchestrator |
2026-04-13 00:33:04.560882 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-04-13 00:33:04.560894 | orchestrator | Monday 13 April 2026 00:32:17 +0000 (0:00:08.585) 0:05:43.924 **********
2026-04-13 00:33:04.560905 | orchestrator | changed: [testbed-manager]
2026-04-13 00:33:04.560930 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:04.560941 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:04.560952 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:04.560962 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:04.560973 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:04.560983 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:04.560994 | orchestrator |
2026-04-13 00:33:04.561005 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-04-13 00:33:04.561015 | orchestrator | Monday 13 April 2026 00:32:21 +0000 (0:00:03.910) 0:05:47.834 **********
2026-04-13 00:33:04.561026 | orchestrator | ok: [testbed-manager]
2026-04-13 00:33:04.561037 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:04.561047 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:04.561058 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:04.561068 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:04.561078 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:04.561089 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:04.561099 | orchestrator |
2026-04-13 00:33:04.561110 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-04-13 00:33:04.561152 | orchestrator | Monday 13 April 2026 00:32:22 +0000 (0:00:01.305) 0:05:49.139 **********
2026-04-13 00:33:04.561164 | orchestrator | ok: [testbed-manager]
2026-04-13 00:33:04.561175 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:04.561185 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:04.561196 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:04.561206 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:04.561217 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:04.561228 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:04.561238 | orchestrator |
2026-04-13 00:33:04.561249 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-04-13 00:33:04.561260 | orchestrator | Monday 13 April 2026 00:32:23 +0000 (0:00:01.351) 0:05:50.491 **********
2026-04-13 00:33:04.561270 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:33:04.561282 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:33:04.561292 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:33:04.561303 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:33:04.561314 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:33:04.561324 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:33:04.561335 | orchestrator | changed: [testbed-manager]
2026-04-13 00:33:04.561345 | orchestrator |
2026-04-13 00:33:04.561356 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-04-13 00:33:04.561367 | orchestrator | Monday 13 April 2026 00:32:24 +0000 (0:00:00.587) 0:05:51.078 **********
2026-04-13 00:33:04.561378 | orchestrator | ok: [testbed-manager]
2026-04-13 00:33:04.561389 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:04.561400 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:04.561419 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:04.561430 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:04.561440 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:04.561451 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:04.561462 | orchestrator |
2026-04-13 00:33:04.561473 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-04-13 00:33:04.561502 | orchestrator | Monday 13 April 2026 00:32:35 +0000 (0:00:10.455) 0:06:01.534 **********
2026-04-13 00:33:04.561514 | orchestrator | changed: [testbed-manager]
2026-04-13 00:33:04.561525 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:04.561535 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:04.561546 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:04.561556 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:04.561567 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:04.561577 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:04.561588 | orchestrator |
2026-04-13 00:33:04.561599 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-04-13 00:33:04.561610 | orchestrator | Monday 13 April 2026 00:32:36 +0000 (0:00:01.134) 0:06:02.669 **********
2026-04-13 00:33:04.561620 | orchestrator | ok: [testbed-manager]
2026-04-13 00:33:04.561631 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:04.561642 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:04.561652 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:04.561662 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:04.561673 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:04.561683 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:04.561694 | orchestrator |
2026-04-13 00:33:04.561705 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-04-13 00:33:04.561715 | orchestrator | Monday 13 April 2026 00:32:46 +0000 (0:00:09.874) 0:06:12.543 **********
2026-04-13 00:33:04.561726 | orchestrator | ok: [testbed-manager]
2026-04-13 00:33:04.561737 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:04.561747 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:04.561758 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:04.561768 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:04.561779 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:04.561789 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:04.561800 | orchestrator |
2026-04-13 00:33:04.561810 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-04-13 00:33:04.561821 | orchestrator | Monday 13 April 2026 00:32:57 +0000 (0:00:11.712) 0:06:24.256 **********
2026-04-13 00:33:04.561832 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-04-13 00:33:04.561843 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-04-13 00:33:04.561854 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-04-13 00:33:04.561864 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-04-13 00:33:04.561875 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-04-13 00:33:04.561886 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-04-13 00:33:04.561896 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-04-13 00:33:04.561907 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-04-13 00:33:04.561918 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-04-13 00:33:04.561929 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-04-13 00:33:04.561939 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-04-13 00:33:04.561950 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-04-13 00:33:04.561960 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-04-13 00:33:04.561971 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-04-13 00:33:04.562165 | orchestrator |
2026-04-13 00:33:04.562178 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-04-13 00:33:04.562189 | orchestrator | Monday 13 April 2026 00:32:59 +0000 (0:00:01.333) 0:06:25.590 **********
2026-04-13 00:33:04.562218 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:33:04.562229 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:33:04.562240 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:33:04.562250 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:33:04.562261 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:33:04.562271 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:33:04.562282 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:33:04.562292 | orchestrator |
2026-04-13 00:33:04.562303 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-13 00:33:04.562313 | orchestrator | Monday 13 April 2026 00:32:59 +0000 (0:00:00.729) 0:06:26.319 **********
2026-04-13 00:33:04.562324 | orchestrator | ok: [testbed-manager]
2026-04-13 00:33:04.562335 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:04.562345 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:04.562355 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:04.562366 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:04.562376 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:04.562387 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:04.562397 | orchestrator |
2026-04-13 00:33:04.562408 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-13 00:33:04.562420 | orchestrator | Monday 13 April 2026 00:33:03 +0000 (0:00:03.943) 0:06:30.262 **********
2026-04-13 00:33:04.562431 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:33:04.562442 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:33:04.562452 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:33:04.562463 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:33:04.562473 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:33:04.562484 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:33:04.562494 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:33:04.562505 | orchestrator |
2026-04-13 00:33:04.562557 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-13 00:33:04.562570 | orchestrator | Monday 13 April 2026 00:33:04 +0000 (0:00:00.545) 0:06:30.808 **********
2026-04-13 00:33:04.562580 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-13 00:33:04.562591 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-13 00:33:04.562602 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:33:04.562612 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-13 00:33:04.562623 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-13 00:33:04.562633 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:33:04.562644 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-13 00:33:04.562654 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-13 00:33:04.562665 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:33:04.562687 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-13 00:33:23.937934 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-13 00:33:23.938201 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:33:23.938239 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-13 00:33:23.938260 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-13 00:33:23.938279 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:33:23.938298 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-13 00:33:23.938314 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-04-13 00:33:23.938325 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:33:23.938336 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-04-13 00:33:23.938346 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-04-13 00:33:23.938357 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:33:23.938368 | orchestrator |
2026-04-13 00:33:23.938380 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-13 00:33:23.938418 | orchestrator | Monday 13 April 2026 00:33:04 +0000 (0:00:00.580) 0:06:31.388 **********
2026-04-13 00:33:23.938430 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:33:23.938441 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:33:23.938452 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:33:23.938464 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:33:23.938477 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:33:23.938488 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:33:23.938501 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:33:23.938513 | orchestrator |
2026-04-13 00:33:23.938526 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-13 00:33:23.938538 | orchestrator | Monday 13 April 2026 00:33:05 +0000 (0:00:00.525) 0:06:31.914 **********
2026-04-13 00:33:23.938550 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:33:23.938562 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:33:23.938574 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:33:23.938587 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:33:23.938597 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:33:23.938608 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:33:23.938619 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:33:23.938630 | orchestrator |
2026-04-13 00:33:23.938641 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-13 00:33:23.938659 | orchestrator | Monday 13 April 2026 00:33:06 +0000 (0:00:00.736) 0:06:32.650 **********
2026-04-13 00:33:23.938684 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:33:23.938704 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:33:23.938720 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:33:23.938739 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:33:23.938755 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:33:23.938771 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:33:23.938789 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:33:23.938805 | orchestrator |
2026-04-13 00:33:23.938822 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-13 00:33:23.938860 | orchestrator | Monday 13 April 2026 00:33:06 +0000 (0:00:00.517) 0:06:33.167 **********
2026-04-13 00:33:23.938878 | orchestrator | ok: [testbed-manager]
2026-04-13 00:33:23.938895 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:33:23.938914 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:33:23.938932 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:33:23.938949 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:33:23.938966 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:33:23.938983 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:33:23.939001 | orchestrator |
2026-04-13 00:33:23.939020 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-13 00:33:23.939038 | orchestrator | Monday 13 April 2026 00:33:08 +0000 (0:00:01.696) 0:06:34.864 **********
2026-04-13 00:33:23.939057 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:33:23.939076 | orchestrator |
2026-04-13 00:33:23.939087 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-13 00:33:23.939128 | orchestrator | Monday 13 April 2026 00:33:09 +0000 (0:00:00.885) 0:06:35.750 **********
2026-04-13 00:33:23.939141 | orchestrator | ok: [testbed-manager]
2026-04-13 00:33:23.939152 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:23.939162 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:23.939173 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:23.939184 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:23.939195 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:23.939205 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:23.939216 | orchestrator |
2026-04-13 00:33:23.939227 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-13 00:33:23.939251 | orchestrator | Monday 13 April 2026 00:33:10 +0000 (0:00:01.066) 0:06:36.816 **********
2026-04-13 00:33:23.939265 | orchestrator | ok: [testbed-manager]
2026-04-13 00:33:23.939285 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:23.939304 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:23.939322 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:23.939340 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:23.939355 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:23.939372 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:23.939389 | orchestrator |
2026-04-13 00:33:23.939405 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-13 00:33:23.939423 | orchestrator | Monday 13 April 2026 00:33:11 +0000 (0:00:00.934) 0:06:37.750 **********
2026-04-13 00:33:23.939441 | orchestrator | ok: [testbed-manager]
2026-04-13 00:33:23.939460 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:23.939479 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:23.939496 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:23.939516 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:23.939537 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:23.939555 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:23.939576 | orchestrator |
2026-04-13 00:33:23.939595 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-13 00:33:23.939638 | orchestrator | Monday 13 April 2026 00:33:12 +0000 (0:00:01.320) 0:06:39.071 **********
2026-04-13 00:33:23.939651 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:33:23.939661 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:33:23.939672 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:33:23.939683 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:33:23.939693 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:33:23.939703 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:33:23.939714 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:33:23.939724 | orchestrator |
2026-04-13 00:33:23.939735 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-13 00:33:23.939746 | orchestrator | Monday 13 April 2026 00:33:13 +0000 (0:00:01.387) 0:06:40.458 **********
2026-04-13 00:33:23.939757 | orchestrator | ok: [testbed-manager]
2026-04-13 00:33:23.939767 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:23.939778 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:23.939789 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:23.939799 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:23.939810 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:23.939820 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:23.939830 | orchestrator |
2026-04-13 00:33:23.939841 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-13 00:33:23.939852 | orchestrator | Monday 13 April 2026 00:33:15 +0000 (0:00:01.496) 0:06:41.955 **********
2026-04-13 00:33:23.939863 | orchestrator | changed: [testbed-manager]
2026-04-13 00:33:23.939873 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:33:23.939884 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:33:23.939894 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:33:23.939905 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:33:23.939915 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:33:23.939926 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:33:23.939937 | orchestrator |
2026-04-13 00:33:23.939947 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-13 00:33:23.939958 | orchestrator | Monday 13 April 2026 00:33:16 +0000 (0:00:01.420) 0:06:43.375 **********
2026-04-13 00:33:23.939969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:33:23.939980 | orchestrator |
2026-04-13 00:33:23.939991 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-13 00:33:23.940017 | orchestrator | Monday 13 April 2026 00:33:17 +0000 (0:00:00.927) 0:06:44.303 **********
2026-04-13 00:33:23.940028 | orchestrator | ok: [testbed-manager]
2026-04-13 00:33:23.940039 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:33:23.940049 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:33:23.940060 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:33:23.940070 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:33:23.940081 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:33:23.940091 | orchestrator | ok:
[testbed-node-5] 2026-04-13 00:33:23.940144 | orchestrator | 2026-04-13 00:33:23.940164 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-04-13 00:33:23.940182 | orchestrator | Monday 13 April 2026 00:33:19 +0000 (0:00:01.372) 0:06:45.676 ********** 2026-04-13 00:33:23.940198 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:23.940208 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:23.940219 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:23.940229 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:23.940240 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:23.940251 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:23.940261 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:23.940271 | orchestrator | 2026-04-13 00:33:23.940282 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-04-13 00:33:23.940293 | orchestrator | Monday 13 April 2026 00:33:20 +0000 (0:00:01.354) 0:06:47.031 ********** 2026-04-13 00:33:23.940304 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:23.940314 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:23.940325 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:23.940335 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:23.940346 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:23.940357 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:23.940367 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:23.940378 | orchestrator | 2026-04-13 00:33:23.940389 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-04-13 00:33:23.940399 | orchestrator | Monday 13 April 2026 00:33:21 +0000 (0:00:01.123) 0:06:48.154 ********** 2026-04-13 00:33:23.940410 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:23.940420 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:23.940431 | orchestrator | ok: [testbed-node-1] 2026-04-13 
00:33:23.940441 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:23.940452 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:23.940462 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:23.940472 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:23.940483 | orchestrator | 2026-04-13 00:33:23.940494 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-04-13 00:33:23.940504 | orchestrator | Monday 13 April 2026 00:33:22 +0000 (0:00:01.116) 0:06:49.271 ********** 2026-04-13 00:33:23.940515 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:33:23.940526 | orchestrator | 2026-04-13 00:33:23.940536 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-13 00:33:23.940547 | orchestrator | Monday 13 April 2026 00:33:23 +0000 (0:00:00.887) 0:06:50.159 ********** 2026-04-13 00:33:23.940558 | orchestrator | 2026-04-13 00:33:23.940568 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-13 00:33:23.940579 | orchestrator | Monday 13 April 2026 00:33:23 +0000 (0:00:00.212) 0:06:50.372 ********** 2026-04-13 00:33:23.940590 | orchestrator | 2026-04-13 00:33:23.940600 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-13 00:33:23.940611 | orchestrator | Monday 13 April 2026 00:33:23 +0000 (0:00:00.040) 0:06:50.413 ********** 2026-04-13 00:33:23.940622 | orchestrator | 2026-04-13 00:33:23.940632 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-13 00:33:23.940651 | orchestrator | Monday 13 April 2026 00:33:23 +0000 (0:00:00.041) 0:06:50.454 ********** 2026-04-13 00:33:51.095837 | orchestrator | 
2026-04-13 00:33:51.095976 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-13 00:33:51.095993 | orchestrator | Monday 13 April 2026 00:33:23 +0000 (0:00:00.050) 0:06:50.504 ********** 2026-04-13 00:33:51.096005 | orchestrator | 2026-04-13 00:33:51.096016 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-13 00:33:51.096028 | orchestrator | Monday 13 April 2026 00:33:24 +0000 (0:00:00.040) 0:06:50.545 ********** 2026-04-13 00:33:51.096038 | orchestrator | 2026-04-13 00:33:51.096050 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-13 00:33:51.096061 | orchestrator | Monday 13 April 2026 00:33:24 +0000 (0:00:00.041) 0:06:50.586 ********** 2026-04-13 00:33:51.096071 | orchestrator | 2026-04-13 00:33:51.096116 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-13 00:33:51.096127 | orchestrator | Monday 13 April 2026 00:33:24 +0000 (0:00:00.054) 0:06:50.641 ********** 2026-04-13 00:33:51.096138 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:51.096150 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:51.096161 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:51.096171 | orchestrator | 2026-04-13 00:33:51.096182 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-04-13 00:33:51.096193 | orchestrator | Monday 13 April 2026 00:33:25 +0000 (0:00:01.219) 0:06:51.860 ********** 2026-04-13 00:33:51.096204 | orchestrator | changed: [testbed-manager] 2026-04-13 00:33:51.096216 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:51.096227 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:51.096238 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:51.096248 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:51.096260 | orchestrator | changed: 
[testbed-node-4] 2026-04-13 00:33:51.096271 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:51.096282 | orchestrator | 2026-04-13 00:33:51.096293 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-04-13 00:33:51.096304 | orchestrator | Monday 13 April 2026 00:33:26 +0000 (0:00:01.326) 0:06:53.187 ********** 2026-04-13 00:33:51.096314 | orchestrator | changed: [testbed-manager] 2026-04-13 00:33:51.096325 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:51.096336 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:51.096346 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:51.096357 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:51.096368 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:51.096382 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:51.096395 | orchestrator | 2026-04-13 00:33:51.096407 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-04-13 00:33:51.096420 | orchestrator | Monday 13 April 2026 00:33:27 +0000 (0:00:01.195) 0:06:54.383 ********** 2026-04-13 00:33:51.096432 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:51.096445 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:51.096457 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:51.096470 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:51.096482 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:51.096495 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:51.096507 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:51.096520 | orchestrator | 2026-04-13 00:33:51.096548 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-04-13 00:33:51.096561 | orchestrator | Monday 13 April 2026 00:33:30 +0000 (0:00:02.355) 0:06:56.738 ********** 2026-04-13 00:33:51.096575 | orchestrator | skipping: [testbed-node-0] 
2026-04-13 00:33:51.096588 | orchestrator | 2026-04-13 00:33:51.096603 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-04-13 00:33:51.096616 | orchestrator | Monday 13 April 2026 00:33:30 +0000 (0:00:00.106) 0:06:56.845 ********** 2026-04-13 00:33:51.096629 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:51.096643 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:51.096656 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:51.096671 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:51.096692 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:51.096706 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:51.096719 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:51.096733 | orchestrator | 2026-04-13 00:33:51.096744 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-04-13 00:33:51.096757 | orchestrator | Monday 13 April 2026 00:33:31 +0000 (0:00:01.297) 0:06:58.142 ********** 2026-04-13 00:33:51.096769 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:51.096780 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:51.096792 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:33:51.096803 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:33:51.096815 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:33:51.096826 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:33:51.096837 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:51.096849 | orchestrator | 2026-04-13 00:33:51.096861 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-04-13 00:33:51.096872 | orchestrator | Monday 13 April 2026 00:33:32 +0000 (0:00:00.534) 0:06:58.677 ********** 2026-04-13 00:33:51.096884 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml 
for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:33:51.096898 | orchestrator | 2026-04-13 00:33:51.096910 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-04-13 00:33:51.096922 | orchestrator | Monday 13 April 2026 00:33:33 +0000 (0:00:00.967) 0:06:59.644 ********** 2026-04-13 00:33:51.096934 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:51.096945 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:51.096957 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:51.096968 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:51.096980 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:51.096991 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:51.097003 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:51.097014 | orchestrator | 2026-04-13 00:33:51.097026 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-04-13 00:33:51.097038 | orchestrator | Monday 13 April 2026 00:33:34 +0000 (0:00:01.086) 0:07:00.731 ********** 2026-04-13 00:33:51.097049 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-04-13 00:33:51.097098 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-04-13 00:33:51.097111 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-04-13 00:33:51.097122 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-04-13 00:33:51.097132 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-04-13 00:33:51.097143 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-04-13 00:33:51.097154 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-04-13 00:33:51.097164 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-04-13 00:33:51.097175 | orchestrator | changed: [testbed-node-1] => 
(item=docker_images) 2026-04-13 00:33:51.097186 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-04-13 00:33:51.097196 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-04-13 00:33:51.097207 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-04-13 00:33:51.097218 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-04-13 00:33:51.097228 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-04-13 00:33:51.097239 | orchestrator | 2026-04-13 00:33:51.097250 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-04-13 00:33:51.097260 | orchestrator | Monday 13 April 2026 00:33:36 +0000 (0:00:02.553) 0:07:03.284 ********** 2026-04-13 00:33:51.097271 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:51.097282 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:51.097292 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:33:51.097311 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:33:51.097322 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:33:51.097333 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:33:51.097344 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:51.097354 | orchestrator | 2026-04-13 00:33:51.097365 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-04-13 00:33:51.097376 | orchestrator | Monday 13 April 2026 00:33:37 +0000 (0:00:00.607) 0:07:03.892 ********** 2026-04-13 00:33:51.097389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:33:51.097402 | orchestrator | 2026-04-13 00:33:51.097413 | orchestrator | TASK [osism.commons.docker_compose : Remove 
docker-compose apt preferences file] *** 2026-04-13 00:33:51.097424 | orchestrator | Monday 13 April 2026 00:33:38 +0000 (0:00:01.063) 0:07:04.955 ********** 2026-04-13 00:33:51.097434 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:51.097445 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:51.097455 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:51.097466 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:51.097477 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:51.097487 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:51.097503 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:51.097514 | orchestrator | 2026-04-13 00:33:51.097525 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-04-13 00:33:51.097536 | orchestrator | Monday 13 April 2026 00:33:39 +0000 (0:00:00.873) 0:07:05.829 ********** 2026-04-13 00:33:51.097547 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:51.097557 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:51.097568 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:51.097578 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:51.097589 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:51.097599 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:51.097610 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:51.097620 | orchestrator | 2026-04-13 00:33:51.097631 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-04-13 00:33:51.097642 | orchestrator | Monday 13 April 2026 00:33:40 +0000 (0:00:00.832) 0:07:06.662 ********** 2026-04-13 00:33:51.097653 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:51.097663 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:51.097674 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:33:51.097685 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:33:51.097696 | orchestrator | skipping: [testbed-node-3] 
2026-04-13 00:33:51.097707 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:33:51.097717 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:51.097728 | orchestrator | 2026-04-13 00:33:51.097739 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-04-13 00:33:51.097750 | orchestrator | Monday 13 April 2026 00:33:40 +0000 (0:00:00.546) 0:07:07.208 ********** 2026-04-13 00:33:51.097761 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:51.097771 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:51.097782 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:51.097792 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:51.097803 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:51.097813 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:51.097824 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:51.097834 | orchestrator | 2026-04-13 00:33:51.097845 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-04-13 00:33:51.097856 | orchestrator | Monday 13 April 2026 00:33:42 +0000 (0:00:01.564) 0:07:08.772 ********** 2026-04-13 00:33:51.097867 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:51.097878 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:51.097888 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:33:51.097899 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:33:51.097917 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:33:51.097928 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:33:51.097938 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:51.097949 | orchestrator | 2026-04-13 00:33:51.097960 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-04-13 00:33:51.097970 | orchestrator | Monday 13 April 2026 00:33:42 +0000 (0:00:00.755) 0:07:09.528 ********** 2026-04-13 00:33:51.097981 | orchestrator | 
ok: [testbed-manager] 2026-04-13 00:33:51.097992 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:51.098002 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:51.098013 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:51.098073 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:51.098101 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:51.098120 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:34:24.594725 | orchestrator | 2026-04-13 00:34:24.594850 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-04-13 00:34:24.594867 | orchestrator | Monday 13 April 2026 00:33:51 +0000 (0:00:08.152) 0:07:17.681 ********** 2026-04-13 00:34:24.594876 | orchestrator | ok: [testbed-manager] 2026-04-13 00:34:24.594885 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:34:24.594895 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:34:24.594903 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:34:24.594911 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:34:24.594919 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:34:24.594926 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:34:24.594934 | orchestrator | 2026-04-13 00:34:24.594942 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-04-13 00:34:24.594950 | orchestrator | Monday 13 April 2026 00:33:52 +0000 (0:00:01.342) 0:07:19.024 ********** 2026-04-13 00:34:24.594958 | orchestrator | ok: [testbed-manager] 2026-04-13 00:34:24.594966 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:34:24.594973 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:34:24.594981 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:34:24.594989 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:34:24.594997 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:34:24.595005 | orchestrator | changed: [testbed-node-5] 2026-04-13 
00:34:24.595013 | orchestrator | 2026-04-13 00:34:24.595020 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-04-13 00:34:24.595028 | orchestrator | Monday 13 April 2026 00:33:54 +0000 (0:00:01.797) 0:07:20.821 ********** 2026-04-13 00:34:24.595036 | orchestrator | ok: [testbed-manager] 2026-04-13 00:34:24.595100 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:34:24.595110 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:34:24.595118 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:34:24.595126 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:34:24.595134 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:34:24.595142 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:34:24.595149 | orchestrator | 2026-04-13 00:34:24.595157 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-13 00:34:24.595167 | orchestrator | Monday 13 April 2026 00:33:56 +0000 (0:00:01.870) 0:07:22.692 ********** 2026-04-13 00:34:24.595181 | orchestrator | ok: [testbed-manager] 2026-04-13 00:34:24.595194 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:34:24.595206 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:34:24.595219 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:34:24.595232 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:34:24.595246 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:34:24.595258 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:34:24.595271 | orchestrator | 2026-04-13 00:34:24.595283 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-13 00:34:24.595296 | orchestrator | Monday 13 April 2026 00:33:57 +0000 (0:00:00.912) 0:07:23.604 ********** 2026-04-13 00:34:24.595309 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:34:24.595323 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:34:24.595366 | orchestrator | skipping: 
[testbed-node-1] 2026-04-13 00:34:24.595381 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:34:24.595394 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:34:24.595407 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:34:24.595419 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:34:24.595433 | orchestrator | 2026-04-13 00:34:24.595446 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-04-13 00:34:24.595458 | orchestrator | Monday 13 April 2026 00:33:58 +0000 (0:00:00.929) 0:07:24.534 ********** 2026-04-13 00:34:24.595471 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:34:24.595484 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:34:24.595496 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:34:24.595510 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:34:24.595523 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:34:24.595535 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:34:24.595547 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:34:24.595559 | orchestrator | 2026-04-13 00:34:24.595571 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-04-13 00:34:24.595584 | orchestrator | Monday 13 April 2026 00:33:58 +0000 (0:00:00.706) 0:07:25.240 ********** 2026-04-13 00:34:24.595596 | orchestrator | ok: [testbed-manager] 2026-04-13 00:34:24.595608 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:34:24.595620 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:34:24.595632 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:34:24.595645 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:34:24.595657 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:34:24.595669 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:34:24.595682 | orchestrator | 2026-04-13 00:34:24.595697 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 
2026-04-13 00:34:24.595709 | orchestrator | Monday 13 April 2026 00:33:59 +0000 (0:00:00.567) 0:07:25.808 ********** 2026-04-13 00:34:24.595721 | orchestrator | ok: [testbed-manager] 2026-04-13 00:34:24.595733 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:34:24.595746 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:34:24.595759 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:34:24.595773 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:34:24.595788 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:34:24.595801 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:34:24.595813 | orchestrator | 2026-04-13 00:34:24.595827 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-04-13 00:34:24.595841 | orchestrator | Monday 13 April 2026 00:33:59 +0000 (0:00:00.528) 0:07:26.336 ********** 2026-04-13 00:34:24.595854 | orchestrator | ok: [testbed-manager] 2026-04-13 00:34:24.595867 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:34:24.595881 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:34:24.595893 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:34:24.595906 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:34:24.595918 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:34:24.595930 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:34:24.595942 | orchestrator | 2026-04-13 00:34:24.595955 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-04-13 00:34:24.595968 | orchestrator | Monday 13 April 2026 00:34:00 +0000 (0:00:00.529) 0:07:26.865 ********** 2026-04-13 00:34:24.595980 | orchestrator | ok: [testbed-manager] 2026-04-13 00:34:24.595993 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:34:24.596004 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:34:24.596017 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:34:24.596030 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:34:24.596070 | orchestrator | ok: [testbed-node-5] 
2026-04-13 00:34:24.596107 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:34:24.596120 | orchestrator | 2026-04-13 00:34:24.596158 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-04-13 00:34:24.596173 | orchestrator | Monday 13 April 2026 00:34:05 +0000 (0:00:05.644) 0:07:32.510 ********** 2026-04-13 00:34:24.596187 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:34:24.596217 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:34:24.596231 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:34:24.596244 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:34:24.596257 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:34:24.596270 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:34:24.596283 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:34:24.596297 | orchestrator | 2026-04-13 00:34:24.596310 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-04-13 00:34:24.596324 | orchestrator | Monday 13 April 2026 00:34:06 +0000 (0:00:00.744) 0:07:33.255 ********** 2026-04-13 00:34:24.596340 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:34:24.596355 | orchestrator | 2026-04-13 00:34:24.596370 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-04-13 00:34:24.596385 | orchestrator | Monday 13 April 2026 00:34:07 +0000 (0:00:00.826) 0:07:34.082 ********** 2026-04-13 00:34:24.596398 | orchestrator | ok: [testbed-manager] 2026-04-13 00:34:24.596411 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:34:24.596424 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:34:24.596438 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:34:24.596452 | 
orchestrator | ok: [testbed-node-3] 2026-04-13 00:34:24.596467 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:34:24.596481 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:34:24.596494 | orchestrator | 2026-04-13 00:34:24.596508 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-04-13 00:34:24.596522 | orchestrator | Monday 13 April 2026 00:34:09 +0000 (0:00:02.216) 0:07:36.298 ********** 2026-04-13 00:34:24.596534 | orchestrator | ok: [testbed-manager] 2026-04-13 00:34:24.596546 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:34:24.596558 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:34:24.596570 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:34:24.596582 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:34:24.596594 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:34:24.596606 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:34:24.596618 | orchestrator | 2026-04-13 00:34:24.596630 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-04-13 00:34:24.596642 | orchestrator | Monday 13 April 2026 00:34:11 +0000 (0:00:01.365) 0:07:37.663 ********** 2026-04-13 00:34:24.596654 | orchestrator | ok: [testbed-manager] 2026-04-13 00:34:24.596666 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:34:24.596678 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:34:24.596690 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:34:24.596702 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:34:24.596715 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:34:24.596727 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:34:24.596738 | orchestrator | 2026-04-13 00:34:24.596761 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-04-13 00:34:24.596777 | orchestrator | Monday 13 April 2026 00:34:11 +0000 (0:00:00.809) 0:07:38.473 ********** 2026-04-13 00:34:24.596791 | orchestrator | changed: 
[testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-13 00:34:24.596806 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-13 00:34:24.596818 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-13 00:34:24.596831 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-13 00:34:24.596844 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-13 00:34:24.596872 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-13 00:34:24.596885 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-13 00:34:24.596898 | orchestrator |
2026-04-13 00:34:24.596912 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-13 00:34:24.596925 | orchestrator | Monday 13 April 2026 00:34:13 +0000 (0:00:01.707) 0:07:40.181 **********
2026-04-13 00:34:24.596970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:34:24.596979 | orchestrator |
2026-04-13 00:34:24.596987 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-13 00:34:24.596995 | orchestrator | Monday 13 April 2026 00:34:14 +0000 (0:00:01.036) 0:07:41.217 **********
2026-04-13 00:34:24.597002 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:24.597010 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:24.597018 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:24.597025 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:24.597033 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:24.597041 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:24.597083 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:24.597091 | orchestrator |
2026-04-13 00:34:24.597112 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-04-13 00:34:56.189435 | orchestrator | Monday 13 April 2026 00:34:24 +0000 (0:00:09.895) 0:07:51.113 **********
2026-04-13 00:34:56.189572 | orchestrator | ok: [testbed-manager]
2026-04-13 00:34:56.189601 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:56.189619 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:56.189636 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:56.189655 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:56.189671 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:56.189689 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:56.189708 | orchestrator |
2026-04-13 00:34:56.189726 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-04-13 00:34:56.189744 | orchestrator | Monday 13 April 2026 00:34:26 +0000 (0:00:01.775) 0:07:52.889 **********
2026-04-13 00:34:56.189763 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:56.189779 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:56.189796 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:56.189814 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:56.189832 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:56.189851 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:56.189868 | orchestrator |
2026-04-13 00:34:56.189886 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-04-13 00:34:56.189904 | orchestrator | Monday 13 April 2026 00:34:27 +0000 (0:00:01.530) 0:07:54.420 **********
2026-04-13 00:34:56.189921 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.189941 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.189959 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.189976 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.189992 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.190112 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.190136 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.190153 | orchestrator |
2026-04-13 00:34:56.190172 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-04-13 00:34:56.190189 | orchestrator |
2026-04-13 00:34:56.190205 | orchestrator | TASK [Include hardening role] **************************************************
2026-04-13 00:34:56.190221 | orchestrator | Monday 13 April 2026 00:34:29 +0000 (0:00:01.876) 0:07:56.297 **********
2026-04-13 00:34:56.190237 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:34:56.190284 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:34:56.190301 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:34:56.190424 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:34:56.190441 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:34:56.190458 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:34:56.190475 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:34:56.190492 | orchestrator |
2026-04-13 00:34:56.190509 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-04-13 00:34:56.190527 | orchestrator |
2026-04-13 00:34:56.190543 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-04-13 00:34:56.190560 | orchestrator | Monday 13 April 2026 00:34:30 +0000 (0:00:00.535) 0:07:56.832 **********
2026-04-13 00:34:56.190576 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.190593 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.190610 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.190626 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.190663 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.190680 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.190696 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.190711 | orchestrator |
2026-04-13 00:34:56.190724 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-04-13 00:34:56.190734 | orchestrator | Monday 13 April 2026 00:34:31 +0000 (0:00:01.400) 0:07:58.233 **********
2026-04-13 00:34:56.190743 | orchestrator | ok: [testbed-manager]
2026-04-13 00:34:56.190753 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:56.190762 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:56.190772 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:56.190781 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:56.190790 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:56.190799 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:56.190809 | orchestrator |
2026-04-13 00:34:56.190818 | orchestrator | TASK [Include auditd role] *****************************************************
2026-04-13 00:34:56.190828 | orchestrator | Monday 13 April 2026 00:34:33 +0000 (0:00:01.590) 0:07:59.824 **********
2026-04-13 00:34:56.190837 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:34:56.190847 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:34:56.190856 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:34:56.190866 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:34:56.190875 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:34:56.190885 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:34:56.190894 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:34:56.190903 | orchestrator |
2026-04-13 00:34:56.190913 | orchestrator | TASK [Include smartd role] *****************************************************
2026-04-13 00:34:56.190923 | orchestrator | Monday 13 April 2026 00:34:33 +0000 (0:00:00.529) 0:08:00.354 **********
2026-04-13 00:34:56.190933 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:34:56.190945 | orchestrator |
2026-04-13 00:34:56.190954 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-04-13 00:34:56.190964 | orchestrator | Monday 13 April 2026 00:34:34 +0000 (0:00:00.844) 0:08:01.198 **********
2026-04-13 00:34:56.190976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:34:56.190988 | orchestrator |
2026-04-13 00:34:56.190998 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-04-13 00:34:56.191007 | orchestrator | Monday 13 April 2026 00:34:35 +0000 (0:00:01.028) 0:08:02.227 **********
2026-04-13 00:34:56.191060 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.191071 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.191081 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.191090 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.191113 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.191123 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.191132 |
orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.191142 | orchestrator |
2026-04-13 00:34:56.191176 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-04-13 00:34:56.191186 | orchestrator | Monday 13 April 2026 00:34:44 +0000 (0:00:08.923) 0:08:11.150 **********
2026-04-13 00:34:56.191196 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.191205 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.191215 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.191224 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.191233 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.191242 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.191252 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.191262 | orchestrator |
2026-04-13 00:34:56.191271 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-04-13 00:34:56.191281 | orchestrator | Monday 13 April 2026 00:34:45 +0000 (0:00:00.825) 0:08:11.975 **********
2026-04-13 00:34:56.191290 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.191300 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.191309 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.191318 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.191328 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.191337 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.191346 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.191355 | orchestrator |
2026-04-13 00:34:56.191365 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-04-13 00:34:56.191375 | orchestrator | Monday 13 April 2026 00:34:46 +0000 (0:00:01.358) 0:08:13.334 **********
2026-04-13 00:34:56.191384 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.191393 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.191403 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.191412 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.191421 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.191431 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.191440 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.191449 | orchestrator |
2026-04-13 00:34:56.191459 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-04-13 00:34:56.191469 | orchestrator | Monday 13 April 2026 00:34:48 +0000 (0:00:02.031) 0:08:15.366 **********
2026-04-13 00:34:56.191478 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.191488 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.191497 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.191506 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.191515 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.191525 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.191534 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.191544 | orchestrator |
2026-04-13 00:34:56.191553 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-13 00:34:56.191563 | orchestrator | Monday 13 April 2026 00:34:50 +0000 (0:00:01.265) 0:08:16.632 **********
2026-04-13 00:34:56.191572 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.191582 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.191591 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.191600 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.191610 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.191625 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.191635 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.191644 | orchestrator |
2026-04-13 00:34:56.191654 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-13 00:34:56.191663 | orchestrator |
2026-04-13 00:34:56.191673 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-13 00:34:56.191682 | orchestrator | Monday 13 April 2026 00:34:51 +0000 (0:00:01.099) 0:08:17.732 **********
2026-04-13 00:34:56.191699 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:34:56.191708 | orchestrator |
2026-04-13 00:34:56.191718 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-13 00:34:56.191727 | orchestrator | Monday 13 April 2026 00:34:52 +0000 (0:00:01.014) 0:08:18.746 **********
2026-04-13 00:34:56.191736 | orchestrator | ok: [testbed-manager]
2026-04-13 00:34:56.191746 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:56.191755 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:56.191764 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:56.191774 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:56.191798 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:56.191808 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:56.191817 | orchestrator |
2026-04-13 00:34:56.191827 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-13 00:34:56.191847 | orchestrator | Monday 13 April 2026 00:34:53 +0000 (0:00:00.850) 0:08:19.597 **********
2026-04-13 00:34:56.191856 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.191866 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.191875 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.191885 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.191894 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.191903 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.191912 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.191922 | orchestrator |
2026-04-13 00:34:56.191931 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-13 00:34:56.191940 | orchestrator | Monday 13 April 2026 00:34:54 +0000 (0:00:01.365) 0:08:20.963 **********
2026-04-13 00:34:56.191950 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:34:56.191960 | orchestrator |
2026-04-13 00:34:56.191969 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-13 00:34:56.191978 | orchestrator | Monday 13 April 2026 00:34:55 +0000 (0:00:00.896) 0:08:21.860 **********
2026-04-13 00:34:56.191988 | orchestrator | ok: [testbed-manager]
2026-04-13 00:34:56.191998 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:56.192007 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:56.192094 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:56.192104 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:56.192114 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:56.192123 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:56.192133 | orchestrator |
2026-04-13 00:34:56.192150 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-13 00:34:57.997296 | orchestrator | Monday 13 April 2026 00:34:56 +0000 (0:00:00.847) 0:08:22.707 **********
2026-04-13 00:34:57.997394 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:57.997408 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:57.997419 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:57.997428 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:57.997438 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:57.997447 |
orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:57.997457 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:57.997466 | orchestrator |
2026-04-13 00:34:57.997476 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:34:57.997487 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-13 00:34:57.997498 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-13 00:34:57.997508 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-13 00:34:57.997543 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-13 00:34:57.997553 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-13 00:34:57.997563 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-13 00:34:57.997572 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-13 00:34:57.997582 | orchestrator |
2026-04-13 00:34:57.997591 | orchestrator |
2026-04-13 00:34:57.997601 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:34:57.997610 | orchestrator | Monday 13 April 2026 00:34:57 +0000 (0:00:01.405) 0:08:24.113 **********
2026-04-13 00:34:57.997620 | orchestrator | ===============================================================================
2026-04-13 00:34:57.997630 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.83s
2026-04-13 00:34:57.997639 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.46s
2026-04-13 00:34:57.997669 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 36.07s
2026-04-13 00:34:57.997679 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.68s
2026-04-13 00:34:57.997689 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.57s
2026-04-13 00:34:57.997698 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.25s
2026-04-13 00:34:57.997708 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.71s
2026-04-13 00:34:57.997718 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.46s
2026-04-13 00:34:57.997727 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.90s
2026-04-13 00:34:57.997736 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.87s
2026-04-13 00:34:57.997746 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.92s
2026-04-13 00:34:57.997755 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.59s
2026-04-13 00:34:57.997765 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.50s
2026-04-13 00:34:57.997774 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.35s
2026-04-13 00:34:57.997784 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.15s
2026-04-13 00:34:57.997793 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.04s
2026-04-13 00:34:57.997803 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.53s
2026-04-13 00:34:57.997812 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.20s
2026-04-13 00:34:57.997821 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.88s
2026-04-13 00:34:57.997831 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.64s
2026-04-13 00:34:58.208297 | orchestrator | + osism apply fail2ban
2026-04-13 00:35:10.046321 | orchestrator | 2026-04-13 00:35:10 | INFO  | Prepare task for execution of fail2ban.
2026-04-13 00:35:10.141460 | orchestrator | 2026-04-13 00:35:10 | INFO  | Task 91d19fd4-d185-4c64-a527-45f1e422b2c8 (fail2ban) was prepared for execution.
2026-04-13 00:35:10.141545 | orchestrator | 2026-04-13 00:35:10 | INFO  | It takes a moment until task 91d19fd4-d185-4c64-a527-45f1e422b2c8 (fail2ban) has been started and output is visible here.
2026-04-13 00:35:31.781800 | orchestrator |
2026-04-13 00:35:31.781917 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-13 00:35:31.781959 | orchestrator |
2026-04-13 00:35:31.782075 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-13 00:35:31.782092 | orchestrator | Monday 13 April 2026 00:35:13 +0000 (0:00:00.368) 0:00:00.368 **********
2026-04-13 00:35:31.782106 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:35:31.782119 | orchestrator |
2026-04-13 00:35:31.782131 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-13 00:35:31.782142 | orchestrator | Monday 13 April 2026 00:35:15 +0000 (0:00:01.229) 0:00:01.597 **********
2026-04-13 00:35:31.782153 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:35:31.782166 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:35:31.782176 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:35:31.782187 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:35:31.782198 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:35:31.782208 | orchestrator | changed: [testbed-manager]
2026-04-13 00:35:31.782253 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:35:31.782265 | orchestrator |
2026-04-13 00:35:31.782277 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-13 00:35:31.782288 | orchestrator | Monday 13 April 2026 00:35:26 +0000 (0:00:11.677) 0:00:13.275 **********
2026-04-13 00:35:31.782299 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:35:31.782309 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:35:31.782320 | orchestrator | changed: [testbed-manager]
2026-04-13 00:35:31.782331 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:35:31.782343 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:35:31.782355 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:35:31.782368 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:35:31.782380 | orchestrator |
2026-04-13 00:35:31.782393 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-13 00:35:31.782405 | orchestrator | Monday 13 April 2026 00:35:28 +0000 (0:00:01.297) 0:00:14.919 **********
2026-04-13 00:35:31.782417 | orchestrator | ok: [testbed-manager]
2026-04-13 00:35:31.782430 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:35:31.782442 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:35:31.782454 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:35:31.782466 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:35:31.782479 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:35:31.782491 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:35:31.782503 | orchestrator |
2026-04-13 00:35:31.782515 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-13 00:35:31.782527 | orchestrator | Monday 13 April 2026 00:35:29 +0000 (0:00:01.297) 0:00:16.216 **********
2026-04-13 00:35:31.782554 | orchestrator | changed: [testbed-manager]
2026-04-13 00:35:31.782567 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:35:31.782591 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:35:31.782603 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:35:31.782615 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:35:31.782627 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:35:31.782639 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:35:31.782652 | orchestrator |
2026-04-13 00:35:31.782664 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:35:31.782692 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:35:31.782706 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:35:31.782717 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:35:31.782728 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:35:31.782748 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:35:31.782759 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:35:31.782770 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:35:31.782780 | orchestrator |
2026-04-13 00:35:31.782791 | orchestrator |
2026-04-13 00:35:31.782802 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:35:31.782813 | orchestrator | Monday 13 April 2026 00:35:31 +0000 (0:00:01.645) 0:00:17.862 **********
2026-04-13 00:35:31.782824 | orchestrator | ===============================================================================
2026-04-13 00:35:31.782835 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.68s
2026-04-13 00:35:31.782845 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.65s
2026-04-13 00:35:31.782856 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.64s
2026-04-13 00:35:31.782867 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.30s
2026-04-13 00:35:31.782878 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.23s
2026-04-13 00:35:32.001500 | orchestrator | + osism apply network
2026-04-13 00:35:43.444948 | orchestrator | 2026-04-13 00:35:43 | INFO  | Prepare task for execution of network.
2026-04-13 00:35:43.532404 | orchestrator | 2026-04-13 00:35:43 | INFO  | Task 9eba8520-d5b7-4840-a715-797bc32540c4 (network) was prepared for execution.
2026-04-13 00:35:43.532529 | orchestrator | 2026-04-13 00:35:43 | INFO  | It takes a moment until task 9eba8520-d5b7-4840-a715-797bc32540c4 (network) has been started and output is visible here.
2026-04-13 00:36:13.168272 | orchestrator |
2026-04-13 00:36:13.168402 | orchestrator | PLAY [Apply role network] ******************************************************
2026-04-13 00:36:13.168429 | orchestrator |
2026-04-13 00:36:13.168452 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-04-13 00:36:13.168472 | orchestrator | Monday 13 April 2026 00:35:46 +0000 (0:00:00.339) 0:00:00.339 **********
2026-04-13 00:36:13.168491 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:13.168504 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:13.168515 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:13.168526 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:13.168536 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:13.168547 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:13.168558 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:13.168568 | orchestrator |
2026-04-13 00:36:13.168579 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-04-13 00:36:13.168590 | orchestrator | Monday 13 April 2026 00:35:47 +0000 (0:00:00.638) 0:00:00.978 **********
2026-04-13 00:36:13.168603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:36:13.168617 | orchestrator |
2026-04-13 00:36:13.168628 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-04-13 00:36:13.168639 | orchestrator | Monday 13 April 2026 00:35:48 +0000 (0:00:01.207) 0:00:02.185 **********
2026-04-13 00:36:13.168649 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:13.168660 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:13.168671 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:13.168681 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:13.168692 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:13.168727 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:13.168739 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:13.168750 | orchestrator |
2026-04-13 00:36:13.168763 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-04-13 00:36:13.168775 | orchestrator | Monday 13 April 2026 00:35:51 +0000 (0:00:02.743) 0:00:04.928 **********
2026-04-13 00:36:13.168787 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:13.168799 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:13.168811 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:13.168823 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:13.168836 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:13.168849 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:13.168861 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:13.168873 | orchestrator |
2026-04-13 00:36:13.168885 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-04-13 00:36:13.168898 | orchestrator | Monday 13 April 2026 00:35:53 +0000 (0:00:01.628) 0:00:06.557 **********
2026-04-13 00:36:13.168910 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-04-13 00:36:13.168956 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-04-13 00:36:13.168978 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-04-13 00:36:13.168999 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-04-13 00:36:13.169018 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-04-13 00:36:13.169035 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-04-13 00:36:13.169048 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-04-13 00:36:13.169061 | orchestrator |
2026-04-13 00:36:13.169074 | orchestrator | TASK [osism.commons.network : Write network_netplan_config_template to temporary file] ***
2026-04-13 00:36:13.169088 | orchestrator | Monday 13 April 2026 00:35:54 +0000 (0:00:01.214) 0:00:07.772 **********
2026-04-13 00:36:13.169100 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:36:13.169113 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:36:13.169124 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:36:13.169134 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:36:13.169145 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:36:13.169155 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:36:13.169166 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:36:13.169177 | orchestrator |
2026-04-13 00:36:13.169187 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] ***
2026-04-13 00:36:13.169200 | orchestrator | Monday 13 April 2026 00:35:54 +0000 (0:00:00.634) 0:00:08.407 **********
2026-04-13 00:36:13.169211 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:36:13.169221 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:36:13.169232 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:36:13.169242 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:36:13.169253 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:36:13.169263 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:36:13.169274 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:36:13.169284 | orchestrator |
2026-04-13 00:36:13.169314 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] ***
2026-04-13 00:36:13.169325 | orchestrator | Monday 13 April 2026 00:35:55 +0000 (0:00:00.806) 0:00:09.214 **********
2026-04-13 00:36:13.169336 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:36:13.169347 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:36:13.169357 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:36:13.169368 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:36:13.169378 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:36:13.169389 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:36:13.169400 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:36:13.169410 | orchestrator |
2026-04-13 00:36:13.169421 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-04-13 00:36:13.169432 | orchestrator | Monday 13 April 2026 00:35:56 +0000 (0:00:00.850) 0:00:10.064 **********
2026-04-13 00:36:13.169452 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-13 00:36:13.169463 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-13 00:36:13.169473 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-13 00:36:13.169484 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-13 00:36:13.169494 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-13 00:36:13.169505 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-13 00:36:13.169516 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-13 00:36:13.169526 | orchestrator |
2026-04-13 00:36:13.169557 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-04-13 00:36:13.169569 | orchestrator | Monday 13 April 2026 00:36:00 +0000 (0:00:03.489) 0:00:13.554 **********
2026-04-13 00:36:13.169580 | orchestrator | changed: [testbed-manager]
2026-04-13 00:36:13.169591 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:36:13.169601 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:36:13.169612 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:36:13.169622 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:36:13.169633 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:36:13.169643 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:36:13.169654 | orchestrator |
2026-04-13 00:36:13.169665 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-04-13 00:36:13.169676 | orchestrator | Monday 13 April 2026 00:36:01 +0000 (0:00:01.681) 0:00:15.236 **********
2026-04-13 00:36:13.169686 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-13 00:36:13.169698 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-13 00:36:13.169716 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-13 00:36:13.169727 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-13 00:36:13.169738 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-13 00:36:13.169748 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-13 00:36:13.169759 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-13 00:36:13.169769 | orchestrator |
2026-04-13 00:36:13.169780 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-04-13 00:36:13.169791 | orchestrator | Monday 13 April 2026 00:36:03 +0000 (0:00:01.131) 0:00:17.062 **********
2026-04-13 00:36:13.169801 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:13.169812 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:13.169823 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:13.169833 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:13.169844 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:13.169854 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:13.169864 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:13.169875 | orchestrator |
2026-04-13 00:36:13.169886 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-04-13 00:36:13.169896 | orchestrator | Monday 13 April 2026 00:36:04 +0000 (0:00:01.131) 0:00:18.194 **********
2026-04-13 00:36:13.169907 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:36:13.169918 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:36:13.169954 | orchestrator | skipping: [testbed-node-1]
2026-04-13
00:36:13.169966 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:36:13.169977 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:36:13.169988 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:36:13.169998 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:36:13.170009 | orchestrator | 2026-04-13 00:36:13.170079 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-04-13 00:36:13.170092 | orchestrator | Monday 13 April 2026 00:36:05 +0000 (0:00:00.669) 0:00:18.864 ********** 2026-04-13 00:36:13.170102 | orchestrator | ok: [testbed-manager] 2026-04-13 00:36:13.170113 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:36:13.170124 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:36:13.170135 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:36:13.170145 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:36:13.170162 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:36:13.170173 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:36:13.170184 | orchestrator | 2026-04-13 00:36:13.170203 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-13 00:36:13.170213 | orchestrator | Monday 13 April 2026 00:36:07 +0000 (0:00:02.207) 0:00:21.072 ********** 2026-04-13 00:36:13.170224 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:36:13.170235 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:36:13.170246 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:36:13.170256 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:36:13.170267 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:36:13.170278 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:36:13.170288 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-04-13 00:36:13.170300 | orchestrator | 2026-04-13 00:36:13.170311 | orchestrator | TASK 
[osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-13 00:36:13.170322 | orchestrator | Monday 13 April 2026 00:36:08 +0000 (0:00:00.924) 0:00:21.996 ********** 2026-04-13 00:36:13.170332 | orchestrator | ok: [testbed-manager] 2026-04-13 00:36:13.170343 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:36:13.170353 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:36:13.170364 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:36:13.170375 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:36:13.170385 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:36:13.170396 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:36:13.170406 | orchestrator | 2026-04-13 00:36:13.170417 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-13 00:36:13.170428 | orchestrator | Monday 13 April 2026 00:36:10 +0000 (0:00:01.634) 0:00:23.631 ********** 2026-04-13 00:36:13.170439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:36:13.170452 | orchestrator | 2026-04-13 00:36:13.170463 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-13 00:36:13.170473 | orchestrator | Monday 13 April 2026 00:36:11 +0000 (0:00:01.268) 0:00:24.900 ********** 2026-04-13 00:36:13.170484 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:36:13.170495 | orchestrator | ok: [testbed-manager] 2026-04-13 00:36:13.170505 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:36:13.170516 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:36:13.170526 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:36:13.170537 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:36:13.170547 | orchestrator | ok: [testbed-node-5] 2026-04-13 
00:36:13.170558 | orchestrator | 2026-04-13 00:36:13.170569 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-13 00:36:13.170580 | orchestrator | Monday 13 April 2026 00:36:12 +0000 (0:00:01.200) 0:00:26.100 ********** 2026-04-13 00:36:13.170590 | orchestrator | ok: [testbed-manager] 2026-04-13 00:36:13.170601 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:36:13.170612 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:36:13.170622 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:36:13.170633 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:36:13.170651 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:36:31.121316 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:36:31.121430 | orchestrator | 2026-04-13 00:36:31.121448 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-13 00:36:31.121462 | orchestrator | Monday 13 April 2026 00:36:13 +0000 (0:00:00.655) 0:00:26.756 ********** 2026-04-13 00:36:31.121475 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-13 00:36:31.121487 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-13 00:36:31.121500 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-13 00:36:31.121513 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-13 00:36:31.121525 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-13 00:36:31.121567 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-13 00:36:31.121583 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-13 00:36:31.121599 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-13 00:36:31.121615 | orchestrator | changed: [testbed-node-2] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-04-13 00:36:31.121630 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-13 00:36:31.121646 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-13 00:36:31.121662 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-13 00:36:31.121678 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-13 00:36:31.121694 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-13 00:36:31.121709 | orchestrator | 2026-04-13 00:36:31.121726 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-13 00:36:31.121742 | orchestrator | Monday 13 April 2026 00:36:14 +0000 (0:00:01.345) 0:00:28.101 ********** 2026-04-13 00:36:31.121758 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:36:31.121773 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:36:31.121787 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:36:31.121801 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:36:31.121816 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:36:31.121831 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:36:31.121846 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:36:31.121861 | orchestrator | 2026-04-13 00:36:31.121878 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-13 00:36:31.121894 | orchestrator | Monday 13 April 2026 00:36:15 +0000 (0:00:00.659) 0:00:28.761 ********** 2026-04-13 00:36:31.121988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-4, testbed-node-3, testbed-node-2, testbed-node-5 2026-04-13 00:36:31.122009 | orchestrator | 2026-04-13 
00:36:31.122100 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-13 00:36:31.122121 | orchestrator | Monday 13 April 2026 00:36:19 +0000 (0:00:04.679) 0:00:33.440 ********** 2026-04-13 00:36:31.122138 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-13 00:36:31.122158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-13 00:36:31.122174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-13 00:36:31.122190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-13 00:36:31.122205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-13 00:36:31.122220 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': 
['192.168.128.5/20']}}) 2026-04-13 00:36:31.122288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-13 00:36:31.122306 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-13 00:36:31.122319 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-13 00:36:31.122328 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-13 00:36:31.122338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-13 00:36:31.122347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-13 00:36:31.122356 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': 
'192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-13 00:36:31.122365 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-13 00:36:31.122373 | orchestrator | 2026-04-13 00:36:31.122388 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-13 00:36:31.122397 | orchestrator | Monday 13 April 2026 00:36:25 +0000 (0:00:05.889) 0:00:39.330 ********** 2026-04-13 00:36:31.122405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-13 00:36:31.122414 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-13 00:36:31.122423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-13 00:36:31.122432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-13 00:36:31.122441 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-13 00:36:31.122456 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-13 00:36:31.122465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-13 00:36:31.122481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-13 00:36:43.842322 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-13 00:36:43.842422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-13 00:36:43.842439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-13 00:36:43.842450 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-13 00:36:43.842461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-13 00:36:43.842472 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-13 00:36:43.842483 | orchestrator | 2026-04-13 00:36:43.842495 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-13 00:36:43.842506 | orchestrator | Monday 13 April 2026 00:36:32 +0000 (0:00:06.166) 0:00:45.496 ********** 2026-04-13 00:36:43.842532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:36:43.842543 | orchestrator | 2026-04-13 00:36:43.842554 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-13 00:36:43.842564 | orchestrator | Monday 13 April 2026 00:36:33 +0000 (0:00:01.420) 0:00:46.917 ********** 2026-04-13 00:36:43.842574 | orchestrator | ok: [testbed-manager] 2026-04-13 00:36:43.842585 | orchestrator | ok: [testbed-node-0] 2026-04-13 
00:36:43.842595 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:36:43.842606 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:36:43.842616 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:36:43.842625 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:36:43.842635 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:36:43.842664 | orchestrator | 2026-04-13 00:36:43.842675 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-13 00:36:43.842685 | orchestrator | Monday 13 April 2026 00:36:34 +0000 (0:00:00.957) 0:00:47.874 ********** 2026-04-13 00:36:43.842695 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-13 00:36:43.842705 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-13 00:36:43.842716 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-13 00:36:43.842725 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-13 00:36:43.842735 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:36:43.842746 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-13 00:36:43.842756 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-13 00:36:43.842765 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-13 00:36:43.842775 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-13 00:36:43.842785 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:36:43.842795 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-13 00:36:43.842805 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-13 00:36:43.842815 | orchestrator | 
skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-13 00:36:43.842824 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-13 00:36:43.842834 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:36:43.842844 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-13 00:36:43.842854 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-13 00:36:43.842864 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-13 00:36:43.842920 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-13 00:36:43.842933 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-13 00:36:43.842943 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-13 00:36:43.842952 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-13 00:36:43.842962 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-13 00:36:43.842971 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:36:43.842981 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-13 00:36:43.842990 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-13 00:36:43.843000 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-13 00:36:43.843009 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-13 00:36:43.843018 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:36:43.843028 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:36:43.843037 | orchestrator | skipping: [testbed-node-5] => 
(item=/etc/systemd/network/30-vxlan1.network)  2026-04-13 00:36:43.843047 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-13 00:36:43.843056 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-13 00:36:43.843066 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-13 00:36:43.843075 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:36:43.843085 | orchestrator | 2026-04-13 00:36:43.843094 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-04-13 00:36:43.843111 | orchestrator | Monday 13 April 2026 00:36:35 +0000 (0:00:01.000) 0:00:48.875 ********** 2026-04-13 00:36:43.843121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:36:43.843131 | orchestrator | 2026-04-13 00:36:43.843141 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-04-13 00:36:43.843150 | orchestrator | Monday 13 April 2026 00:36:36 +0000 (0:00:01.268) 0:00:50.143 ********** 2026-04-13 00:36:43.843165 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:36:43.843175 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:36:43.843185 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:36:43.843195 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:36:43.843204 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:36:43.843214 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:36:43.843223 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:36:43.843232 | orchestrator | 2026-04-13 00:36:43.843242 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 
2026-04-13 00:36:43.843251 | orchestrator | Monday 13 April 2026 00:36:37 +0000 (0:00:00.648) 0:00:50.792 ********** 2026-04-13 00:36:43.843261 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:36:43.843270 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:36:43.843279 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:36:43.843288 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:36:43.843298 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:36:43.843307 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:36:43.843316 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:36:43.843326 | orchestrator | 2026-04-13 00:36:43.843335 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-04-13 00:36:43.843345 | orchestrator | Monday 13 April 2026 00:36:38 +0000 (0:00:00.846) 0:00:51.638 ********** 2026-04-13 00:36:43.843354 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:36:43.843364 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:36:43.843373 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:36:43.843383 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:36:43.843392 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:36:43.843401 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:36:43.843411 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:36:43.843420 | orchestrator | 2026-04-13 00:36:43.843430 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-04-13 00:36:43.843439 | orchestrator | Monday 13 April 2026 00:36:38 +0000 (0:00:00.664) 0:00:52.303 ********** 2026-04-13 00:36:43.843449 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:36:43.843458 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:36:43.843467 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:36:43.843477 | orchestrator | ok: [testbed-manager] 2026-04-13 00:36:43.843486 | orchestrator | ok: 
[testbed-node-3]
2026-04-13 00:36:43.843496 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:43.843505 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:43.843515 | orchestrator | 
2026-04-13 00:36:43.843524 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-04-13 00:36:43.843534 | orchestrator | Monday 13 April 2026 00:36:40 +0000 (0:00:01.709) 0:00:54.013 **********
2026-04-13 00:36:43.843543 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:43.843553 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:43.843562 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:43.843571 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:43.843581 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:43.843590 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:43.843599 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:43.843608 | orchestrator | 
2026-04-13 00:36:43.843618 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-04-13 00:36:43.843627 | orchestrator | Monday 13 April 2026 00:36:41 +0000 (0:00:01.161) 0:00:55.175 **********
2026-04-13 00:36:43.843641 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:43.843651 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:43.843660 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:43.843670 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:43.843679 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:43.843688 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:43.843697 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:43.843707 | orchestrator | 
2026-04-13 00:36:43.843723 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-04-13 00:36:45.627814 | orchestrator | Monday 13 April 2026 00:36:43 +0000 (0:00:02.126) 0:00:57.301 **********
2026-04-13 00:36:45.627999 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:36:45.628021 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:36:45.628033 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:36:45.628044 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:36:45.628055 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:36:45.628066 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:36:45.628077 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:36:45.628969 | orchestrator | 
2026-04-13 00:36:45.629004 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-04-13 00:36:45.629025 | orchestrator | Monday 13 April 2026 00:36:44 +0000 (0:00:00.813) 0:00:58.114 **********
2026-04-13 00:36:45.629042 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:36:45.629060 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:36:45.629077 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:36:45.629095 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:36:45.629112 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:36:45.629128 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:36:45.629145 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:36:45.629163 | orchestrator | 
2026-04-13 00:36:45.629181 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:36:45.629200 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-13 00:36:45.629219 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-13 00:36:45.629239 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-13 00:36:45.629258 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-13 00:36:45.629277 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-13 00:36:45.629296 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-13 00:36:45.629315 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-13 00:36:45.629340 | orchestrator | 
2026-04-13 00:36:45.629361 | orchestrator | 
2026-04-13 00:36:45.629380 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:36:45.629399 | orchestrator | Monday 13 April 2026 00:36:45 +0000 (0:00:00.585) 0:00:58.700 **********
2026-04-13 00:36:45.629418 | orchestrator | ===============================================================================
2026-04-13 00:36:45.629436 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.17s
2026-04-13 00:36:45.629455 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.89s
2026-04-13 00:36:45.629473 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.68s
2026-04-13 00:36:45.629530 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.49s
2026-04-13 00:36:45.629552 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.74s
2026-04-13 00:36:45.629570 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.21s
2026-04-13 00:36:45.629587 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.13s
2026-04-13 00:36:45.629606 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.83s
2026-04-13 00:36:45.629624 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.71s
2026-04-13 00:36:45.629643 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.68s
2026-04-13 00:36:45.629662 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.63s
2026-04-13 00:36:45.629680 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.63s
2026-04-13 00:36:45.629698 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.42s
2026-04-13 00:36:45.629716 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.35s
2026-04-13 00:36:45.629735 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.27s
2026-04-13 00:36:45.629754 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.27s
2026-04-13 00:36:45.629773 | orchestrator | osism.commons.network : Create required directories --------------------- 1.21s
2026-04-13 00:36:45.629791 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.21s
2026-04-13 00:36:45.629810 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.20s
2026-04-13 00:36:45.629828 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.16s
2026-04-13 00:36:45.817356 | orchestrator | + osism apply wireguard
2026-04-13 00:36:57.139798 | orchestrator | 2026-04-13 00:36:57 | INFO  | Prepare task for execution of wireguard.
2026-04-13 00:36:57.217548 | orchestrator | 2026-04-13 00:36:57 | INFO  | Task 77bf2894-577d-42e4-93af-595d1e7ffed1 (wireguard) was prepared for execution.
2026-04-13 00:36:57.217639 | orchestrator | 2026-04-13 00:36:57 | INFO  | It takes a moment until task 77bf2894-577d-42e4-93af-595d1e7ffed1 (wireguard) has been started and output is visible here.
2026-04-13 00:37:16.917075 | orchestrator | 
2026-04-13 00:37:16.917187 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-04-13 00:37:16.917204 | orchestrator | 
2026-04-13 00:37:16.917217 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-04-13 00:37:16.917229 | orchestrator | Monday 13 April 2026 00:37:00 +0000 (0:00:00.308) 0:00:00.308 **********
2026-04-13 00:37:16.917240 | orchestrator | ok: [testbed-manager]
2026-04-13 00:37:16.917252 | orchestrator | 
2026-04-13 00:37:16.917263 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-04-13 00:37:16.917274 | orchestrator | Monday 13 April 2026 00:37:02 +0000 (0:00:01.949) 0:00:02.257 **********
2026-04-13 00:37:16.917285 | orchestrator | changed: [testbed-manager]
2026-04-13 00:37:16.917296 | orchestrator | 
2026-04-13 00:37:16.917307 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-04-13 00:37:16.917318 | orchestrator | Monday 13 April 2026 00:37:09 +0000 (0:00:06.447) 0:00:08.704 **********
2026-04-13 00:37:16.917329 | orchestrator | changed: [testbed-manager]
2026-04-13 00:37:16.917339 | orchestrator | 
2026-04-13 00:37:16.917350 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-04-13 00:37:16.917383 | orchestrator | Monday 13 April 2026 00:37:09 +0000 (0:00:00.494) 0:00:09.199 **********
2026-04-13 00:37:16.917395 | orchestrator | changed: [testbed-manager]
2026-04-13 00:37:16.917406 | orchestrator | 
2026-04-13 00:37:16.917417 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-04-13 00:37:16.917428 | orchestrator | Monday 13 April 2026 00:37:09 +0000 (0:00:00.596) 0:00:09.668 **********
2026-04-13 00:37:16.917462 | orchestrator | ok: [testbed-manager]
2026-04-13 00:37:16.917474 | orchestrator | 
2026-04-13 00:37:16.917485 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-04-13 00:37:16.917496 | orchestrator | Monday 13 April 2026 00:37:10 +0000 (0:00:00.596) 0:00:10.264 **********
2026-04-13 00:37:16.917506 | orchestrator | ok: [testbed-manager]
2026-04-13 00:37:16.917517 | orchestrator | 
2026-04-13 00:37:16.917528 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-04-13 00:37:16.917539 | orchestrator | Monday 13 April 2026 00:37:11 +0000 (0:00:00.432) 0:00:10.697 **********
2026-04-13 00:37:16.917549 | orchestrator | ok: [testbed-manager]
2026-04-13 00:37:16.917561 | orchestrator | 
2026-04-13 00:37:16.917571 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-04-13 00:37:16.917587 | orchestrator | Monday 13 April 2026 00:37:11 +0000 (0:00:00.432) 0:00:11.129 **********
2026-04-13 00:37:16.917598 | orchestrator | changed: [testbed-manager]
2026-04-13 00:37:16.917609 | orchestrator | 
2026-04-13 00:37:16.917622 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-04-13 00:37:16.917635 | orchestrator | Monday 13 April 2026 00:37:12 +0000 (0:00:01.200) 0:00:12.330 **********
2026-04-13 00:37:16.917648 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-13 00:37:16.917660 | orchestrator | changed: [testbed-manager]
2026-04-13 00:37:16.917672 | orchestrator | 
2026-04-13 00:37:16.917685 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-04-13 00:37:16.917697 | orchestrator | Monday 13 April 2026 00:37:13 +0000 (0:00:00.964) 0:00:13.294 **********
2026-04-13 00:37:16.917710 | orchestrator | changed: [testbed-manager]
2026-04-13 00:37:16.917722 | orchestrator | 
2026-04-13 00:37:16.917734 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-04-13 00:37:16.917746 | orchestrator | Monday 13 April 2026 00:37:15 +0000 (0:00:02.100) 0:00:15.395 **********
2026-04-13 00:37:16.917759 | orchestrator | changed: [testbed-manager]
2026-04-13 00:37:16.917771 | orchestrator | 
2026-04-13 00:37:16.917783 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:37:16.917796 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:37:16.917809 | orchestrator | 
2026-04-13 00:37:16.917822 | orchestrator | 
2026-04-13 00:37:16.917834 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:37:16.917847 | orchestrator | Monday 13 April 2026 00:37:16 +0000 (0:00:00.951) 0:00:16.346 **********
2026-04-13 00:37:16.917882 | orchestrator | ===============================================================================
2026-04-13 00:37:16.917895 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.45s
2026-04-13 00:37:16.917907 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 2.10s
2026-04-13 00:37:16.917920 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.95s
2026-04-13 00:37:16.917933 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.20s
2026-04-13 00:37:16.917945 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s
2026-04-13 00:37:16.917957 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s
2026-04-13 00:37:16.917970 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.60s
2026-04-13 00:37:16.917983 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.49s
2026-04-13 00:37:16.917994 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.47s
2026-04-13 00:37:16.918004 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s
2026-04-13 00:37:16.918065 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s
2026-04-13 00:37:17.109728 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-04-13 00:37:17.135344 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-13 00:37:17.135449 | orchestrator | Dload Upload Total Spent Left Speed
2026-04-13 00:37:17.214068 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 190 0 --:--:-- --:--:-- --:--:-- 192
2026-04-13 00:37:17.225238 | orchestrator | + osism apply --environment custom workarounds
2026-04-13 00:37:18.500275 | orchestrator | 2026-04-13 00:37:18 | INFO  | Trying to run play workarounds in environment custom
2026-04-13 00:37:28.566307 | orchestrator | 2026-04-13 00:37:28 | INFO  | Prepare task for execution of workarounds.
2026-04-13 00:37:28.666674 | orchestrator | 2026-04-13 00:37:28 | INFO  | Task c41a7461-66b3-4af3-85a8-e87ac1d20635 (workarounds) was prepared for execution.
2026-04-13 00:37:28.666761 | orchestrator | 2026-04-13 00:37:28 | INFO  | It takes a moment until task c41a7461-66b3-4af3-85a8-e87ac1d20635 (workarounds) has been started and output is visible here.
2026-04-13 00:37:53.688284 | orchestrator | 
2026-04-13 00:37:53.688403 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 00:37:53.688421 | orchestrator | 
2026-04-13 00:37:53.688433 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-04-13 00:37:53.688445 | orchestrator | Monday 13 April 2026 00:37:31 +0000 (0:00:00.176) 0:00:00.176 **********
2026-04-13 00:37:53.688457 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-04-13 00:37:53.688468 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-04-13 00:37:53.688480 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-04-13 00:37:53.688490 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-04-13 00:37:53.688501 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-04-13 00:37:53.688512 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-04-13 00:37:53.688522 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-04-13 00:37:53.688533 | orchestrator | 
2026-04-13 00:37:53.688544 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-04-13 00:37:53.688556 | orchestrator | 
2026-04-13 00:37:53.688567 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-13 00:37:53.688577 | orchestrator | Monday 13 April 2026 00:37:32 +0000 (0:00:00.731) 0:00:00.908 **********
2026-04-13 00:37:53.688605 | orchestrator | ok: [testbed-manager]
2026-04-13 00:37:53.688617 | orchestrator | 
2026-04-13 00:37:53.688629 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-04-13 00:37:53.688639 | orchestrator | 
2026-04-13 00:37:53.688650 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-13 00:37:53.688662 | orchestrator | Monday 13 April 2026 00:37:35 +0000 (0:00:02.896) 0:00:03.805 **********
2026-04-13 00:37:53.688672 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:37:53.688683 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:37:53.688694 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:37:53.688705 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:37:53.688715 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:37:53.688726 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:37:53.688737 | orchestrator | 
2026-04-13 00:37:53.688748 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-04-13 00:37:53.688759 | orchestrator | 
2026-04-13 00:37:53.688770 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-04-13 00:37:53.688781 | orchestrator | Monday 13 April 2026 00:37:37 +0000 (0:00:02.463) 0:00:06.269 **********
2026-04-13 00:37:53.688792 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-13 00:37:53.688805 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-13 00:37:53.688881 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-13 00:37:53.688895 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-13 00:37:53.688908 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-13 00:37:53.688921 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-13 00:37:53.688933 | orchestrator | 
2026-04-13 00:37:53.688945 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-04-13 00:37:53.688960 | orchestrator | Monday 13 April 2026 00:37:39 +0000 (0:00:01.367) 0:00:07.637 **********
2026-04-13 00:37:53.688979 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:37:53.689000 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:37:53.689029 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:37:53.689047 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:37:53.689064 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:37:53.689082 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:37:53.689098 | orchestrator | 
2026-04-13 00:37:53.689113 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-04-13 00:37:53.689129 | orchestrator | Monday 13 April 2026 00:37:43 +0000 (0:00:03.857) 0:00:11.494 **********
2026-04-13 00:37:53.689144 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:37:53.689160 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:37:53.689177 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:37:53.689194 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:37:53.689212 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:37:53.689230 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:37:53.689247 | orchestrator | 
2026-04-13 00:37:53.689263 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-04-13 00:37:53.689279 | orchestrator | 
2026-04-13 00:37:53.689297 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-04-13 00:37:53.689314 | orchestrator | Monday 13 April 2026 00:37:43 +0000 (0:00:00.552) 0:00:12.046 **********
2026-04-13 00:37:53.689331 | orchestrator | changed: [testbed-manager]
2026-04-13 00:37:53.689347 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:37:53.689365 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:37:53.689382 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:37:53.689400 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:37:53.689417 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:37:53.689434 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:37:53.689450 | orchestrator | 
2026-04-13 00:37:53.689468 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-04-13 00:37:53.689485 | orchestrator | Monday 13 April 2026 00:37:45 +0000 (0:00:01.784) 0:00:13.831 **********
2026-04-13 00:37:53.689502 | orchestrator | changed: [testbed-manager]
2026-04-13 00:37:53.689520 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:37:53.689538 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:37:53.689555 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:37:53.689574 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:37:53.689593 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:37:53.689637 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:37:53.689658 | orchestrator | 
2026-04-13 00:37:53.689676 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-04-13 00:37:53.689693 | orchestrator | Monday 13 April 2026 00:37:46 +0000 (0:00:01.491) 0:00:15.322 **********
2026-04-13 00:37:53.689710 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:37:53.689728 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:37:53.689746 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:37:53.689764 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:37:53.689781 | orchestrator | ok: [testbed-manager]
2026-04-13 00:37:53.689800 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:37:53.689857 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:37:53.689895 | orchestrator | 
2026-04-13 00:37:53.689913 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-04-13 00:37:53.689929 | orchestrator | Monday 13 April 2026 00:37:48 +0000 (0:00:01.731) 0:00:17.054 **********
2026-04-13 00:37:53.689944 | orchestrator | changed: [testbed-manager]
2026-04-13 00:37:53.689960 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:37:53.689977 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:37:53.689996 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:37:53.690014 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:37:53.690099 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:37:53.690117 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:37:53.690137 | orchestrator | 
2026-04-13 00:37:53.690158 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-04-13 00:37:53.690176 | orchestrator | Monday 13 April 2026 00:37:50 +0000 (0:00:01.581) 0:00:18.636 **********
2026-04-13 00:37:53.690207 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:37:53.690225 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:37:53.690243 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:37:53.690261 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:37:53.690279 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:37:53.690299 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:37:53.690319 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:37:53.690339 | orchestrator | 
2026-04-13 00:37:53.690359 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-04-13 00:37:53.690377 | orchestrator | 
2026-04-13 00:37:53.690395 | orchestrator | TASK [Install python3-docker] **************************************************
2026-04-13 00:37:53.690413 | orchestrator | Monday 13 April 2026 00:37:51 +0000 (0:00:00.800) 0:00:19.436 **********
2026-04-13 00:37:53.690431 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:37:53.690449 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:37:53.690467 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:37:53.690486 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:37:53.690505 | orchestrator | ok: [testbed-manager]
2026-04-13 00:37:53.690523 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:37:53.690540 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:37:53.690559 | orchestrator | 
2026-04-13 00:37:53.690582 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:37:53.690611 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-13 00:37:53.690631 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:37:53.690648 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:37:53.690666 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:37:53.690683 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:37:53.690698 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:37:53.690716 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:37:53.690733 | orchestrator | 
2026-04-13 00:37:53.690751 | orchestrator | 
2026-04-13 00:37:53.690769 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:37:53.690790 | orchestrator | Monday 13 April 2026 00:37:53 +0000 (0:00:02.551) 0:00:21.988 **********
2026-04-13 00:37:53.690810 | orchestrator | ===============================================================================
2026-04-13 00:37:53.690877 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.86s
2026-04-13 00:37:53.690890 | orchestrator | Apply netplan configuration --------------------------------------------- 2.90s
2026-04-13 00:37:53.690901 | orchestrator | Install python3-docker -------------------------------------------------- 2.55s
2026-04-13 00:37:53.690912 | orchestrator | Apply netplan configuration --------------------------------------------- 2.46s
2026-04-13 00:37:53.690922 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.78s
2026-04-13 00:37:53.690933 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.73s
2026-04-13 00:37:53.690944 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.58s
2026-04-13 00:37:53.690954 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.49s
2026-04-13 00:37:53.690965 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.37s
2026-04-13 00:37:53.690976 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.80s
2026-04-13 00:37:53.690987 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.73s
2026-04-13 00:37:53.691015 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.55s
2026-04-13 00:37:54.230868 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-04-13 00:38:05.618215 | orchestrator | 2026-04-13 00:38:05 | INFO  | Prepare task for execution of reboot.
2026-04-13 00:38:05.693221 | orchestrator | 2026-04-13 00:38:05 | INFO  | Task f143fca7-f6d7-4258-b452-f86b8566f6c0 (reboot) was prepared for execution.
2026-04-13 00:38:05.693349 | orchestrator | 2026-04-13 00:38:05 | INFO  | It takes a moment until task f143fca7-f6d7-4258-b452-f86b8566f6c0 (reboot) has been started and output is visible here.
2026-04-13 00:38:17.208280 | orchestrator | 2026-04-13 00:38:17.208373 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-13 00:38:17.208387 | orchestrator | 2026-04-13 00:38:17.208398 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-13 00:38:17.208408 | orchestrator | Monday 13 April 2026 00:38:08 +0000 (0:00:00.251) 0:00:00.251 ********** 2026-04-13 00:38:17.208418 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:38:17.208428 | orchestrator | 2026-04-13 00:38:17.208438 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-13 00:38:17.208447 | orchestrator | Monday 13 April 2026 00:38:09 +0000 (0:00:00.153) 0:00:00.405 ********** 2026-04-13 00:38:17.208457 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:38:17.208466 | orchestrator | 2026-04-13 00:38:17.208490 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-13 00:38:17.208500 | orchestrator | Monday 13 April 2026 00:38:10 +0000 (0:00:01.257) 0:00:01.662 ********** 2026-04-13 00:38:17.208510 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:38:17.208519 | orchestrator | 2026-04-13 00:38:17.208529 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-13 00:38:17.208538 | orchestrator | 2026-04-13 00:38:17.208548 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-13 00:38:17.208557 | orchestrator | Monday 13 April 2026 00:38:10 +0000 (0:00:00.120) 0:00:01.783 ********** 2026-04-13 00:38:17.208567 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:38:17.208576 | orchestrator | 2026-04-13 00:38:17.208586 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-13 00:38:17.208595 | orchestrator | Monday 13 April 2026 
00:38:10 +0000 (0:00:00.111) 0:00:01.894 ********** 2026-04-13 00:38:17.208605 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:38:17.208614 | orchestrator | 2026-04-13 00:38:17.208624 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-13 00:38:17.208634 | orchestrator | Monday 13 April 2026 00:38:11 +0000 (0:00:01.077) 0:00:02.972 ********** 2026-04-13 00:38:17.208643 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:38:17.208673 | orchestrator | 2026-04-13 00:38:17.208684 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-13 00:38:17.208693 | orchestrator | 2026-04-13 00:38:17.208703 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-13 00:38:17.208713 | orchestrator | Monday 13 April 2026 00:38:11 +0000 (0:00:00.132) 0:00:03.105 ********** 2026-04-13 00:38:17.208722 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:38:17.208732 | orchestrator | 2026-04-13 00:38:17.208741 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-13 00:38:17.208751 | orchestrator | Monday 13 April 2026 00:38:11 +0000 (0:00:00.095) 0:00:03.200 ********** 2026-04-13 00:38:17.208760 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:38:17.208770 | orchestrator | 2026-04-13 00:38:17.208779 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-13 00:38:17.208789 | orchestrator | Monday 13 April 2026 00:38:12 +0000 (0:00:01.058) 0:00:04.259 ********** 2026-04-13 00:38:17.208902 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:38:17.208914 | orchestrator | 2026-04-13 00:38:17.208926 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-13 00:38:17.208937 | orchestrator | 2026-04-13 00:38:17.208949 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-04-13 00:38:17.208959 | orchestrator | Monday 13 April 2026 00:38:13 +0000 (0:00:00.123) 0:00:04.382 ********** 2026-04-13 00:38:17.208970 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:38:17.208981 | orchestrator | 2026-04-13 00:38:17.208992 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-13 00:38:17.209003 | orchestrator | Monday 13 April 2026 00:38:13 +0000 (0:00:00.110) 0:00:04.493 ********** 2026-04-13 00:38:17.209014 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:38:17.209025 | orchestrator | 2026-04-13 00:38:17.209035 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-13 00:38:17.209046 | orchestrator | Monday 13 April 2026 00:38:14 +0000 (0:00:01.078) 0:00:05.571 ********** 2026-04-13 00:38:17.209057 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:38:17.209068 | orchestrator | 2026-04-13 00:38:17.209079 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-13 00:38:17.209089 | orchestrator | 2026-04-13 00:38:17.209101 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-13 00:38:17.209111 | orchestrator | Monday 13 April 2026 00:38:14 +0000 (0:00:00.100) 0:00:05.672 ********** 2026-04-13 00:38:17.209122 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:38:17.209133 | orchestrator | 2026-04-13 00:38:17.209144 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-13 00:38:17.209155 | orchestrator | Monday 13 April 2026 00:38:14 +0000 (0:00:00.213) 0:00:05.886 ********** 2026-04-13 00:38:17.209166 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:38:17.209176 | orchestrator | 2026-04-13 00:38:17.209187 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-04-13 00:38:17.209196 | orchestrator | Monday 13 April 2026 00:38:15 +0000 (0:00:01.034) 0:00:06.921 ********** 2026-04-13 00:38:17.209206 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:38:17.209215 | orchestrator | 2026-04-13 00:38:17.209225 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-13 00:38:17.209234 | orchestrator | 2026-04-13 00:38:17.209244 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-13 00:38:17.209254 | orchestrator | Monday 13 April 2026 00:38:15 +0000 (0:00:00.130) 0:00:07.051 ********** 2026-04-13 00:38:17.209263 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:38:17.209273 | orchestrator | 2026-04-13 00:38:17.209282 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-13 00:38:17.209292 | orchestrator | Monday 13 April 2026 00:38:15 +0000 (0:00:00.098) 0:00:07.149 ********** 2026-04-13 00:38:17.209301 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:38:17.209311 | orchestrator | 2026-04-13 00:38:17.209329 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-13 00:38:17.209339 | orchestrator | Monday 13 April 2026 00:38:16 +0000 (0:00:01.042) 0:00:08.192 ********** 2026-04-13 00:38:17.209369 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:38:17.209379 | orchestrator | 2026-04-13 00:38:17.209389 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:38:17.209400 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:38:17.209411 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:38:17.209427 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-13 00:38:17.209437 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:38:17.209447 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:38:17.209456 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:38:17.209466 | orchestrator | 2026-04-13 00:38:17.209475 | orchestrator | 2026-04-13 00:38:17.209485 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:38:17.209494 | orchestrator | Monday 13 April 2026 00:38:16 +0000 (0:00:00.043) 0:00:08.236 ********** 2026-04-13 00:38:17.209504 | orchestrator | =============================================================================== 2026-04-13 00:38:17.209513 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.55s 2026-04-13 00:38:17.209523 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.78s 2026-04-13 00:38:17.209533 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.65s 2026-04-13 00:38:17.437521 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-13 00:38:28.823940 | orchestrator | 2026-04-13 00:38:28 | INFO  | Prepare task for execution of wait-for-connection. 2026-04-13 00:38:28.901323 | orchestrator | 2026-04-13 00:38:28 | INFO  | Task 33356249-b964-4284-8651-73dff31da3ba (wait-for-connection) was prepared for execution. 2026-04-13 00:38:28.901555 | orchestrator | 2026-04-13 00:38:28 | INFO  | It takes a moment until task 33356249-b964-4284-8651-73dff31da3ba (wait-for-connection) has been started and output is visible here. 
2026-04-13 00:38:44.334431 | orchestrator | 2026-04-13 00:38:44.334547 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-13 00:38:44.334565 | orchestrator | 2026-04-13 00:38:44.334578 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-13 00:38:44.334589 | orchestrator | Monday 13 April 2026 00:38:32 +0000 (0:00:00.400) 0:00:00.400 ********** 2026-04-13 00:38:44.334600 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:38:44.334613 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:38:44.334624 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:38:44.334635 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:38:44.334646 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:38:44.334657 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:38:44.334668 | orchestrator | 2026-04-13 00:38:44.334679 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:38:44.334691 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:38:44.334717 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:38:44.334754 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:38:44.334830 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:38:44.334845 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:38:44.334856 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:38:44.334867 | orchestrator | 2026-04-13 00:38:44.334878 | orchestrator | 2026-04-13 00:38:44.334889 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-13 00:38:44.334899 | orchestrator | Monday 13 April 2026 00:38:44 +0000 (0:00:11.522) 0:00:11.922 ********** 2026-04-13 00:38:44.334910 | orchestrator | =============================================================================== 2026-04-13 00:38:44.334921 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.52s 2026-04-13 00:38:44.609888 | orchestrator | + osism apply hddtemp 2026-04-13 00:38:55.945486 | orchestrator | 2026-04-13 00:38:55 | INFO  | Prepare task for execution of hddtemp. 2026-04-13 00:38:56.036908 | orchestrator | 2026-04-13 00:38:56 | INFO  | Task a3cffd38-0d33-4d2e-930e-744e9c617884 (hddtemp) was prepared for execution. 2026-04-13 00:38:56.037030 | orchestrator | 2026-04-13 00:38:56 | INFO  | It takes a moment until task a3cffd38-0d33-4d2e-930e-744e9c617884 (hddtemp) has been started and output is visible here. 2026-04-13 00:39:23.575032 | orchestrator | 2026-04-13 00:39:23.575130 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-13 00:39:23.575141 | orchestrator | 2026-04-13 00:39:23.575148 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-13 00:39:23.575156 | orchestrator | Monday 13 April 2026 00:38:59 +0000 (0:00:00.355) 0:00:00.355 ********** 2026-04-13 00:39:23.575163 | orchestrator | ok: [testbed-manager] 2026-04-13 00:39:23.575170 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:39:23.575177 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:39:23.575183 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:39:23.575190 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:39:23.575210 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:39:23.575217 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:39:23.575224 | orchestrator | 2026-04-13 00:39:23.575231 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-04-13 00:39:23.575238 | orchestrator | Monday 13 April 2026 00:39:00 +0000 (0:00:00.654) 0:00:01.009 ********** 2026-04-13 00:39:23.575247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:39:23.575256 | orchestrator | 2026-04-13 00:39:23.575263 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-13 00:39:23.575269 | orchestrator | Monday 13 April 2026 00:39:01 +0000 (0:00:01.182) 0:00:02.192 ********** 2026-04-13 00:39:23.575276 | orchestrator | ok: [testbed-manager] 2026-04-13 00:39:23.575283 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:39:23.575289 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:39:23.575296 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:39:23.575302 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:39:23.575308 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:39:23.575315 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:39:23.575322 | orchestrator | 2026-04-13 00:39:23.575329 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-13 00:39:23.575335 | orchestrator | Monday 13 April 2026 00:39:03 +0000 (0:00:02.576) 0:00:04.769 ********** 2026-04-13 00:39:23.575342 | orchestrator | changed: [testbed-manager] 2026-04-13 00:39:23.575368 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:39:23.575375 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:39:23.575382 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:39:23.575389 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:39:23.575396 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:39:23.575403 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:39:23.575410 | 
orchestrator | 2026-04-13 00:39:23.575416 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-13 00:39:23.575423 | orchestrator | Monday 13 April 2026 00:39:05 +0000 (0:00:01.137) 0:00:05.906 ********** 2026-04-13 00:39:23.575430 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:39:23.575437 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:39:23.575444 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:39:23.575451 | orchestrator | ok: [testbed-manager] 2026-04-13 00:39:23.575457 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:39:23.575464 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:39:23.575471 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:39:23.575478 | orchestrator | 2026-04-13 00:39:23.575484 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-13 00:39:23.575491 | orchestrator | Monday 13 April 2026 00:39:06 +0000 (0:00:01.326) 0:00:07.232 ********** 2026-04-13 00:39:23.575498 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:39:23.575505 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:39:23.575512 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:39:23.575554 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:39:23.575561 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:39:23.575568 | orchestrator | changed: [testbed-manager] 2026-04-13 00:39:23.575575 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:39:23.575581 | orchestrator | 2026-04-13 00:39:23.575588 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-13 00:39:23.575596 | orchestrator | Monday 13 April 2026 00:39:06 +0000 (0:00:00.619) 0:00:07.852 ********** 2026-04-13 00:39:23.575603 | orchestrator | changed: [testbed-manager] 2026-04-13 00:39:23.575611 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:39:23.575618 | orchestrator | changed: [testbed-node-2] 
2026-04-13 00:39:23.575625 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:39:23.575632 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:39:23.575640 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:39:23.575647 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:39:23.575654 | orchestrator | 2026-04-13 00:39:23.575661 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-13 00:39:23.575669 | orchestrator | Monday 13 April 2026 00:39:20 +0000 (0:00:13.103) 0:00:20.955 ********** 2026-04-13 00:39:23.575677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:39:23.575685 | orchestrator | 2026-04-13 00:39:23.575692 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-13 00:39:23.575700 | orchestrator | Monday 13 April 2026 00:39:21 +0000 (0:00:01.195) 0:00:22.150 ********** 2026-04-13 00:39:23.575707 | orchestrator | changed: [testbed-manager] 2026-04-13 00:39:23.575715 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:39:23.575722 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:39:23.575729 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:39:23.575765 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:39:23.575772 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:39:23.575778 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:39:23.575784 | orchestrator | 2026-04-13 00:39:23.575790 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:39:23.575798 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:39:23.575829 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:39:23.575837 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:39:23.575845 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:39:23.575857 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:39:23.575865 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:39:23.575872 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:39:23.575880 | orchestrator | 2026-04-13 00:39:23.575886 | orchestrator | 2026-04-13 00:39:23.575893 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:39:23.575901 | orchestrator | Monday 13 April 2026 00:39:23 +0000 (0:00:01.964) 0:00:24.115 ********** 2026-04-13 00:39:23.575908 | orchestrator | =============================================================================== 2026-04-13 00:39:23.575916 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.10s 2026-04-13 00:39:23.575923 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.58s 2026-04-13 00:39:23.575930 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.97s 2026-04-13 00:39:23.575938 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.33s 2026-04-13 00:39:23.575945 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.20s 2026-04-13 00:39:23.575952 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.18s 2026-04-13 00:39:23.575960 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 1.14s 2026-04-13 00:39:23.575967 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.65s 2026-04-13 00:39:23.575973 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.62s 2026-04-13 00:39:23.798226 | orchestrator | ++ semver latest 7.1.1 2026-04-13 00:39:23.849309 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-13 00:39:23.849393 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-13 00:39:23.849406 | orchestrator | + sudo systemctl restart manager.service 2026-04-13 00:39:37.123209 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-13 00:39:37.123325 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-13 00:39:37.123353 | orchestrator | + local max_attempts=60 2026-04-13 00:39:37.123375 | orchestrator | + local name=ceph-ansible 2026-04-13 00:39:37.123390 | orchestrator | + local attempt_num=1 2026-04-13 00:39:37.123401 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:39:37.152960 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:39:37.153027 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:39:37.153039 | orchestrator | + sleep 5 2026-04-13 00:39:42.156344 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:39:42.178977 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:39:42.179063 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:39:42.179088 | orchestrator | + sleep 5 2026-04-13 00:39:47.181048 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:39:47.205935 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:39:47.205990 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:39:47.205998 | orchestrator | + sleep 5 2026-04-13 00:39:52.209845 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:39:52.251336 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:39:52.251421 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:39:52.251461 | orchestrator | + sleep 5 2026-04-13 00:39:57.256340 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:39:57.291053 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:39:57.291137 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:39:57.291152 | orchestrator | + sleep 5 2026-04-13 00:40:02.295293 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:02.335203 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:02.335321 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:02.335344 | orchestrator | + sleep 5 2026-04-13 00:40:07.339060 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:07.373154 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:07.373223 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:07.373237 | orchestrator | + sleep 5 2026-04-13 00:40:12.376442 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:12.411988 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:12.412104 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:12.412125 | orchestrator | + sleep 5 2026-04-13 00:40:17.414930 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:17.454567 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:17.454674 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:17.454716 | orchestrator | + sleep 5 2026-04-13 00:40:22.459316 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:22.499472 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:22.499573 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:22.499588 | orchestrator | + sleep 5 2026-04-13 00:40:27.503661 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:27.546304 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:27.546440 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:27.546466 | orchestrator | + sleep 5 2026-04-13 00:40:32.550357 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:32.596397 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:32.596492 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:32.596507 | orchestrator | + sleep 5 2026-04-13 00:40:37.601520 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:37.637991 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:37.638165 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:37.638190 | orchestrator | + sleep 5 2026-04-13 00:40:42.643335 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:42.680095 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:42.680188 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-13 00:40:42.680203 | orchestrator | + local max_attempts=60 2026-04-13 00:40:42.680215 | orchestrator | + local name=kolla-ansible 2026-04-13 00:40:42.680227 | orchestrator | + local attempt_num=1 2026-04-13 00:40:42.680903 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-13 00:40:42.710643 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:42.710786 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-04-13 00:40:42.710812 | orchestrator | + local max_attempts=60 2026-04-13 00:40:42.710832 | orchestrator | + local name=osism-ansible 2026-04-13 00:40:42.710852 | orchestrator | + local attempt_num=1 2026-04-13 00:40:42.710872 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-13 00:40:42.742856 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:42.742939 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-13 00:40:42.742951 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-13 00:40:42.932812 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-13 00:40:43.086474 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-13 00:40:43.229458 | orchestrator | ARA in osism-ansible already disabled. 2026-04-13 00:40:43.390346 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-13 00:40:43.390701 | orchestrator | + osism apply gather-facts 2026-04-13 00:40:54.799990 | orchestrator | 2026-04-13 00:40:54 | INFO  | Prepare task for execution of gather-facts. 2026-04-13 00:40:54.883252 | orchestrator | 2026-04-13 00:40:54 | INFO  | Task e71a46a5-7403-438b-a6aa-cd5b6733ce8e (gather-facts) was prepared for execution. 2026-04-13 00:40:54.883340 | orchestrator | 2026-04-13 00:40:54 | INFO  | It takes a moment until task e71a46a5-7403-438b-a6aa-cd5b6733ce8e (gather-facts) has been started and output is visible here. 
2026-04-13 00:41:04.667907 | orchestrator | 2026-04-13 00:41:04.667986 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-13 00:41:04.668001 | orchestrator | 2026-04-13 00:41:04.668012 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-13 00:41:04.668022 | orchestrator | Monday 13 April 2026 00:40:57 +0000 (0:00:00.259) 0:00:00.259 ********** 2026-04-13 00:41:04.668032 | orchestrator | ok: [testbed-manager] 2026-04-13 00:41:04.668042 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:41:04.668051 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:41:04.668061 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:41:04.668070 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:41:04.668079 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:41:04.668089 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:41:04.668099 | orchestrator | 2026-04-13 00:41:04.668108 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-13 00:41:04.668118 | orchestrator | 2026-04-13 00:41:04.668128 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-13 00:41:04.668137 | orchestrator | Monday 13 April 2026 00:41:03 +0000 (0:00:06.055) 0:00:06.314 ********** 2026-04-13 00:41:04.668147 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:41:04.668156 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:41:04.668166 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:41:04.668175 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:41:04.668184 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:41:04.668193 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:41:04.668203 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:41:04.668212 | orchestrator | 2026-04-13 00:41:04.668222 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-13 00:41:04.668233 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:41:04.668252 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:41:04.668269 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:41:04.668287 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:41:04.668304 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:41:04.668320 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:41:04.668338 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:41:04.668357 | orchestrator | 2026-04-13 00:41:04.668375 | orchestrator | 2026-04-13 00:41:04.668393 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:41:04.668411 | orchestrator | Monday 13 April 2026 00:41:04 +0000 (0:00:00.550) 0:00:06.865 ********** 2026-04-13 00:41:04.668429 | orchestrator | =============================================================================== 2026-04-13 00:41:04.668446 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.06s 2026-04-13 00:41:04.668464 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-04-13 00:41:04.802289 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-13 00:41:04.811911 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-13 
00:41:04.821475 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-13 00:41:04.837254 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-13 00:41:04.848618 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-13 00:41:04.858275 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-13 00:41:04.867643 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-13 00:41:04.876437 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-13 00:41:04.887680 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-13 00:41:04.900297 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-13 00:41:04.915640 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-13 00:41:04.927542 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-13 00:41:04.939004 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-13 00:41:04.952719 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-13 00:41:04.963849 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-13 00:41:04.973631 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-13 00:41:04.981905 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-13 00:41:04.991477 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-13 00:41:05.000798 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-13 00:41:05.011454 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-13 00:41:05.027826 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-13 00:41:05.043554 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-13 00:41:05.054662 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-13 00:41:05.070918 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-13 00:41:05.223612 | orchestrator | ok: Runtime: 0:24:33.798974 2026-04-13 00:41:05.317262 | 2026-04-13 00:41:05.317398 | TASK [Deploy services] 2026-04-13 00:41:05.853980 | orchestrator | skipping: Conditional result was False 2026-04-13 00:41:05.865352 | 2026-04-13 00:41:05.865496 | TASK [Deploy in a nutshell] 2026-04-13 00:41:06.569115 | orchestrator | + set -e 2026-04-13 00:41:06.569370 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-13 00:41:06.569392 | orchestrator | ++ export INTERACTIVE=false 2026-04-13 00:41:06.569414 | orchestrator | ++ INTERACTIVE=false 2026-04-13 00:41:06.569423 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-13 00:41:06.569439 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-13 00:41:06.569462 | 
orchestrator | + source /opt/manager-vars.sh 2026-04-13 00:41:06.569503 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-13 00:41:06.569531 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-13 00:41:06.569541 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-13 00:41:06.569552 | orchestrator | ++ CEPH_VERSION=reef 2026-04-13 00:41:06.569560 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-13 00:41:06.569573 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-13 00:41:06.569581 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-13 00:41:06.569596 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-13 00:41:06.569603 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-13 00:41:06.569635 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-13 00:41:06.569677 | orchestrator | ++ export ARA=false 2026-04-13 00:41:06.569686 | orchestrator | ++ ARA=false 2026-04-13 00:41:06.569693 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-13 00:41:06.569702 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-13 00:41:06.569709 | orchestrator | ++ export TEMPEST=true 2026-04-13 00:41:06.569717 | orchestrator | ++ TEMPEST=true 2026-04-13 00:41:06.569724 | orchestrator | ++ export IS_ZUUL=true 2026-04-13 00:41:06.569736 | orchestrator | ++ IS_ZUUL=true 2026-04-13 00:41:06.569744 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-04-13 00:41:06.569752 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-04-13 00:41:06.569760 | orchestrator | ++ export EXTERNAL_API=false 2026-04-13 00:41:06.569767 | orchestrator | ++ EXTERNAL_API=false 2026-04-13 00:41:06.569775 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-13 00:41:06.569782 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-13 00:41:06.569790 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-13 00:41:06.569797 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-13 00:41:06.569805 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-13 00:41:06.569813 
| orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-13 00:41:06.569821 | orchestrator | + echo 2026-04-13 00:41:06.569829 | orchestrator | 2026-04-13 00:41:06.569837 | orchestrator | # PULL IMAGES 2026-04-13 00:41:06.569844 | orchestrator | 2026-04-13 00:41:06.569852 | orchestrator | + echo '# PULL IMAGES' 2026-04-13 00:41:06.569859 | orchestrator | + echo 2026-04-13 00:41:06.571347 | orchestrator | ++ semver latest 7.0.0 2026-04-13 00:41:06.614445 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-13 00:41:06.614545 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-13 00:41:06.614584 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-13 00:41:07.889451 | orchestrator | 2026-04-13 00:41:07 | INFO  | Trying to run play pull-images in environment custom 2026-04-13 00:41:17.952537 | orchestrator | 2026-04-13 00:41:17 | INFO  | Prepare task for execution of pull-images. 2026-04-13 00:41:18.043429 | orchestrator | 2026-04-13 00:41:18 | INFO  | Task 46c50509-0e85-48ee-b1c4-6e894beaf777 (pull-images) was prepared for execution. 2026-04-13 00:41:18.043543 | orchestrator | 2026-04-13 00:41:18 | INFO  | Task 46c50509-0e85-48ee-b1c4-6e894beaf777 is running in background. No more output. Check ARA for logs. 2026-04-13 00:41:19.700946 | orchestrator | 2026-04-13 00:41:19 | INFO  | Trying to run play wipe-partitions in environment custom 2026-04-13 00:41:29.828935 | orchestrator | 2026-04-13 00:41:29 | INFO  | Prepare task for execution of wipe-partitions. 2026-04-13 00:41:29.906837 | orchestrator | 2026-04-13 00:41:29 | INFO  | Task 8f1b1824-fc54-4c3f-bf03-cf29f6d88bae (wipe-partitions) was prepared for execution. 2026-04-13 00:41:29.907153 | orchestrator | 2026-04-13 00:41:29 | INFO  | It takes a moment until task 8f1b1824-fc54-4c3f-bf03-cf29f6d88bae (wipe-partitions) has been started and output is visible here. 
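Twice in this log a `semver <version> <minimum>` comparison returns `-1` (because `latest` is not a parseable version) and a literal `latest` check then lets the branch proceed anyway. A hedged sketch of that version gate, assuming the `semver` helper prints `-1`/`0`/`1` like a three-way compare:

```shell
#!/usr/bin/env bash
# Gate a code path on MANAGER_VERSION >= some minimum, treating the
# non-numeric tag "latest" as always new enough (as the trace above does).
# `semver` is assumed to print -1, 0, or 1 for less/equal/greater.
version_gate() {
    local version=$1 minimum=$2 cmp
    cmp=$(semver "$version" "$minimum")
    [[ $cmp -ge 0 || $version == latest ]]
}
```

This explains the `[[ -1 -ge 0 ]]` / `[[ latest == \l\a\t\e\s\t ]]` pairs in the trace: the numeric compare fails for `latest`, and the string check rescues it.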
2026-04-13 00:41:41.929199 | orchestrator | 2026-04-13 00:41:41.929307 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-04-13 00:41:41.929323 | orchestrator | 2026-04-13 00:41:41.929335 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-04-13 00:41:41.929353 | orchestrator | Monday 13 April 2026 00:41:33 +0000 (0:00:00.197) 0:00:00.197 ********** 2026-04-13 00:41:41.929392 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:41:41.929405 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:41:41.929416 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:41:41.929426 | orchestrator | 2026-04-13 00:41:41.929436 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-04-13 00:41:41.929447 | orchestrator | Monday 13 April 2026 00:41:34 +0000 (0:00:00.974) 0:00:01.172 ********** 2026-04-13 00:41:41.929462 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:41:41.929473 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:41:41.929484 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:41:41.929494 | orchestrator | 2026-04-13 00:41:41.929504 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-04-13 00:41:41.929514 | orchestrator | Monday 13 April 2026 00:41:34 +0000 (0:00:00.255) 0:00:01.428 ********** 2026-04-13 00:41:41.929525 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:41:41.929536 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:41:41.929546 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:41:41.929557 | orchestrator | 2026-04-13 00:41:41.929567 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-04-13 00:41:41.929577 | orchestrator | Monday 13 April 2026 00:41:35 +0000 (0:00:00.625) 0:00:02.053 ********** 2026-04-13 00:41:41.929587 | orchestrator | skipping: 
[testbed-node-3] 2026-04-13 00:41:41.929598 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:41:41.929608 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:41:41.929618 | orchestrator | 2026-04-13 00:41:41.929705 | orchestrator | TASK [Check device availability] *********************************************** 2026-04-13 00:41:41.929716 | orchestrator | Monday 13 April 2026 00:41:35 +0000 (0:00:00.256) 0:00:02.310 ********** 2026-04-13 00:41:41.929726 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-13 00:41:41.929741 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-13 00:41:41.929753 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-13 00:41:41.929766 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-13 00:41:41.929778 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-13 00:41:41.929789 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-13 00:41:41.929800 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-13 00:41:41.929810 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-13 00:41:41.929820 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-13 00:41:41.929834 | orchestrator | 2026-04-13 00:41:41.929845 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-04-13 00:41:41.929858 | orchestrator | Monday 13 April 2026 00:41:36 +0000 (0:00:01.342) 0:00:03.652 ********** 2026-04-13 00:41:41.929871 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-04-13 00:41:41.929882 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-04-13 00:41:41.929895 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-04-13 00:41:41.929906 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-04-13 00:41:41.929916 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-04-13 00:41:41.929926 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-04-13 00:41:41.929937 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-04-13 00:41:41.929949 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-04-13 00:41:41.929963 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-04-13 00:41:41.929974 | orchestrator | 2026-04-13 00:41:41.929993 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-04-13 00:41:41.930003 | orchestrator | Monday 13 April 2026 00:41:38 +0000 (0:00:01.314) 0:00:04.967 ********** 2026-04-13 00:41:41.930075 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-13 00:41:41.930091 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-13 00:41:41.930101 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-13 00:41:41.930113 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-13 00:41:41.930135 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-13 00:41:41.930145 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-13 00:41:41.930155 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-13 00:41:41.930165 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-13 00:41:41.930175 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-13 00:41:41.930185 | orchestrator | 2026-04-13 00:41:41.930195 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-04-13 00:41:41.930205 | orchestrator | Monday 13 April 2026 00:41:40 +0000 (0:00:02.218) 0:00:07.185 ********** 2026-04-13 00:41:41.930216 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:41:41.930226 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:41:41.930235 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:41:41.930244 | orchestrator | 2026-04-13 00:41:41.930254 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-04-13 00:41:41.930262 | orchestrator | Monday 13 April 2026 00:41:40 +0000 (0:00:00.576) 0:00:07.762 ********** 2026-04-13 00:41:41.930272 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:41:41.930281 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:41:41.930291 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:41:41.930329 | orchestrator | 2026-04-13 00:41:41.930340 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:41:41.930351 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:41:41.930363 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:41:41.930396 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:41:41.930408 | orchestrator | 2026-04-13 00:41:41.930418 | orchestrator | 2026-04-13 00:41:41.930428 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:41:41.930439 | orchestrator | Monday 13 April 2026 00:41:41 +0000 (0:00:00.812) 0:00:08.574 ********** 2026-04-13 00:41:41.930449 | orchestrator | =============================================================================== 2026-04-13 00:41:41.930459 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.22s 2026-04-13 00:41:41.930470 | orchestrator | Check device availability ----------------------------------------------- 1.34s 2026-04-13 00:41:41.930480 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.31s 2026-04-13 00:41:41.930491 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.98s 2026-04-13 00:41:41.930501 | orchestrator | Request device events from the kernel 
----------------------------------- 0.81s 2026-04-13 00:41:41.930511 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.63s 2026-04-13 00:41:41.930522 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s 2026-04-13 00:41:41.930533 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s 2026-04-13 00:41:41.930543 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s 2026-04-13 00:41:53.621558 | orchestrator | 2026-04-13 00:41:53 | INFO  | Prepare task for execution of facts. 2026-04-13 00:41:53.708702 | orchestrator | 2026-04-13 00:41:53 | INFO  | Task d6896aca-0976-4d9b-b238-2ad84b3da1bc (facts) was prepared for execution. 2026-04-13 00:41:53.708791 | orchestrator | 2026-04-13 00:41:53 | INFO  | It takes a moment until task d6896aca-0976-4d9b-b238-2ad84b3da1bc (facts) has been started and output is visible here. 2026-04-13 00:42:05.716065 | orchestrator | 2026-04-13 00:42:05.716145 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-13 00:42:05.716155 | orchestrator | 2026-04-13 00:42:05.716174 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-13 00:42:05.716180 | orchestrator | Monday 13 April 2026 00:41:57 +0000 (0:00:00.381) 0:00:00.381 ********** 2026-04-13 00:42:05.716186 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:42:05.716193 | orchestrator | ok: [testbed-manager] 2026-04-13 00:42:05.716199 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:42:05.716204 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:42:05.716210 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:42:05.716216 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:42:05.716221 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:42:05.716227 | orchestrator | 2026-04-13 00:42:05.716233 | orchestrator | TASK 
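The wipe-partitions play above reduces to four shell steps per OSD disk: remove leftover logical devices, `wipefs` the device, zero its first 32M, then refresh udev. A sketch of the destructive core against a scratch image file instead of a real `/dev/sdX`, so it runs unprivileged; the device-level commands are left as comments:

```shell
img=$(mktemp)                                               # stand-in for /dev/sdb
dd if=/dev/urandom of="$img" bs=1M count=32 status=none     # pretend it holds data
# wipefs --all /dev/sdb                    # drop filesystem/LVM signatures
dd if=/dev/zero of="$img" bs=1M count=32 conv=notrunc status=none  # "first 32M with zeros"
# udevadm control --reload-rules           # "Reload udev rules"
# udevadm trigger                          # "Request device events from the kernel"
cmp -s "$img" <(head -c $((32*1024*1024)) /dev/zero) && wiped=yes
rm -f "$img"
```

Zeroing the first 32M is enough to destroy GPT headers and LVM/Ceph metadata at the front of the disk; the udev refresh afterwards makes the kernel's view match the now-blank device.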
[osism.commons.facts : Copy fact files] *********************************** 2026-04-13 00:42:05.716239 | orchestrator | Monday 13 April 2026 00:41:58 +0000 (0:00:01.406) 0:00:01.788 ********** 2026-04-13 00:42:05.716244 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:42:05.716251 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:42:05.716256 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:42:05.716262 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:42:05.716268 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:05.716273 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:05.716279 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:42:05.716285 | orchestrator | 2026-04-13 00:42:05.716290 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-13 00:42:05.716302 | orchestrator | 2026-04-13 00:42:05.716308 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-13 00:42:05.716315 | orchestrator | Monday 13 April 2026 00:41:59 +0000 (0:00:01.285) 0:00:03.073 ********** 2026-04-13 00:42:05.716321 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:42:05.716326 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:42:05.716332 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:42:05.716338 | orchestrator | ok: [testbed-manager] 2026-04-13 00:42:05.716343 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:42:05.716349 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:42:05.716355 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:42:05.716360 | orchestrator | 2026-04-13 00:42:05.716366 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-13 00:42:05.716372 | orchestrator | 2026-04-13 00:42:05.716377 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-13 00:42:05.716383 | orchestrator | Monday 13 April 
2026 00:42:04 +0000 (0:00:04.949) 0:00:08.023 ********** 2026-04-13 00:42:05.716389 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:42:05.716395 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:42:05.716400 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:42:05.716406 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:42:05.716411 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:05.716417 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:05.716423 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:42:05.716428 | orchestrator | 2026-04-13 00:42:05.716434 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:42:05.716440 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:42:05.716447 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:42:05.716453 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:42:05.716458 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:42:05.716464 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:42:05.716473 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:42:05.716479 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:42:05.716485 | orchestrator | 2026-04-13 00:42:05.716490 | orchestrator | 2026-04-13 00:42:05.716496 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:42:05.716502 | orchestrator | Monday 13 April 2026 00:42:05 +0000 (0:00:00.492) 0:00:08.516 ********** 2026-04-13 00:42:05.716507 
| orchestrator | =============================================================================== 2026-04-13 00:42:05.716513 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.95s 2026-04-13 00:42:05.716519 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.41s 2026-04-13 00:42:05.716525 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.29s 2026-04-13 00:42:05.716530 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2026-04-13 00:42:07.274695 | orchestrator | 2026-04-13 00:42:07 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-04-13 00:42:07.338821 | orchestrator | 2026-04-13 00:42:07 | INFO  | Task d622a2d1-b023-412f-9cdd-d7bd2331d943 (ceph-configure-lvm-volumes) was prepared for execution. 2026-04-13 00:42:07.338924 | orchestrator | 2026-04-13 00:42:07 | INFO  | It takes a moment until task d622a2d1-b023-412f-9cdd-d7bd2331d943 (ceph-configure-lvm-volumes) has been started and output is visible here. 
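Each PLAY RECAP above uses the same `key=value` layout, which makes the pass/fail decision scriptable. A small sketch that pulls the counters out of one recap line (the line itself is copied from the log):

```shell
# Extract failed/unreachable counters from an Ansible PLAY RECAP line.
line='testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0'
failed=$(printf '%s\n' "$line" | grep -o 'failed=[0-9]*' | cut -d= -f2)
unreachable=$(printf '%s\n' "$line" | grep -o 'unreachable=[0-9]*' | cut -d= -f2)
[ "$failed" -eq 0 ] && [ "$unreachable" -eq 0 ] && verdict=pass
```

A nonzero `failed` or `unreachable` on any host is what would turn a recap like this into a job failure.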
2026-04-13 00:42:19.600161 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-13 00:42:19.600256 | orchestrator | 2.16.14 2026-04-13 00:42:19.600270 | orchestrator | 2026-04-13 00:42:19.600280 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-13 00:42:19.600289 | orchestrator | 2026-04-13 00:42:19.600298 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-13 00:42:19.600306 | orchestrator | Monday 13 April 2026 00:42:11 +0000 (0:00:00.296) 0:00:00.296 ********** 2026-04-13 00:42:19.600315 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-13 00:42:19.600324 | orchestrator | 2026-04-13 00:42:19.600332 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-13 00:42:19.600340 | orchestrator | Monday 13 April 2026 00:42:12 +0000 (0:00:00.229) 0:00:00.526 ********** 2026-04-13 00:42:19.600349 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:42:19.600358 | orchestrator | 2026-04-13 00:42:19.600366 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:19.600374 | orchestrator | Monday 13 April 2026 00:42:12 +0000 (0:00:00.234) 0:00:00.760 ********** 2026-04-13 00:42:19.600393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-13 00:42:19.600401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-13 00:42:19.600410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-13 00:42:19.600418 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-13 00:42:19.600426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-13 
00:42:19.600434 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-13 00:42:19.600442 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-13 00:42:19.600450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-13 00:42:19.600458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-13 00:42:19.600467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-13 00:42:19.600492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-13 00:42:19.600500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-13 00:42:19.600508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-13 00:42:19.600516 | orchestrator | 2026-04-13 00:42:19.600524 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:19.600533 | orchestrator | Monday 13 April 2026 00:42:12 +0000 (0:00:00.378) 0:00:01.139 ********** 2026-04-13 00:42:19.600541 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:19.600549 | orchestrator | 2026-04-13 00:42:19.600557 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:19.600565 | orchestrator | Monday 13 April 2026 00:42:13 +0000 (0:00:00.524) 0:00:01.664 ********** 2026-04-13 00:42:19.600573 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:19.600581 | orchestrator | 2026-04-13 00:42:19.600589 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:19.600637 | orchestrator | Monday 13 April 2026 00:42:13 +0000 (0:00:00.203) 0:00:01.868 ********** 2026-04-13 
00:42:19.600646 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:19.600654 | orchestrator | 2026-04-13 00:42:19.600662 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:19.600671 | orchestrator | Monday 13 April 2026 00:42:13 +0000 (0:00:00.190) 0:00:02.058 ********** 2026-04-13 00:42:19.600679 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:19.600687 | orchestrator | 2026-04-13 00:42:19.600696 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:19.600706 | orchestrator | Monday 13 April 2026 00:42:13 +0000 (0:00:00.211) 0:00:02.269 ********** 2026-04-13 00:42:19.600715 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:19.600724 | orchestrator | 2026-04-13 00:42:19.600734 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:19.600743 | orchestrator | Monday 13 April 2026 00:42:14 +0000 (0:00:00.205) 0:00:02.475 ********** 2026-04-13 00:42:19.600753 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:19.600762 | orchestrator | 2026-04-13 00:42:19.600771 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:19.600780 | orchestrator | Monday 13 April 2026 00:42:14 +0000 (0:00:00.191) 0:00:02.666 ********** 2026-04-13 00:42:19.600789 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:19.600799 | orchestrator | 2026-04-13 00:42:19.600808 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:19.600817 | orchestrator | Monday 13 April 2026 00:42:14 +0000 (0:00:00.202) 0:00:02.868 ********** 2026-04-13 00:42:19.600827 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:19.600836 | orchestrator | 2026-04-13 00:42:19.600845 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-04-13 00:42:19.600854 | orchestrator | Monday 13 April 2026 00:42:14 +0000 (0:00:00.198) 0:00:03.067 ********** 2026-04-13 00:42:19.600862 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3) 2026-04-13 00:42:19.600871 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3) 2026-04-13 00:42:19.600879 | orchestrator | 2026-04-13 00:42:19.600887 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:19.600909 | orchestrator | Monday 13 April 2026 00:42:15 +0000 (0:00:00.404) 0:00:03.472 ********** 2026-04-13 00:42:19.600918 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0679126a-4000-4d61-a7db-c334b9d13f77) 2026-04-13 00:42:19.600926 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0679126a-4000-4d61-a7db-c334b9d13f77) 2026-04-13 00:42:19.600934 | orchestrator | 2026-04-13 00:42:19.600949 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:19.600964 | orchestrator | Monday 13 April 2026 00:42:15 +0000 (0:00:00.407) 0:00:03.880 ********** 2026-04-13 00:42:19.600972 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9561ecc7-53f2-4f93-a506-8a94937d6a2f) 2026-04-13 00:42:19.600980 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9561ecc7-53f2-4f93-a506-8a94937d6a2f) 2026-04-13 00:42:19.600989 | orchestrator | 2026-04-13 00:42:19.600997 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:19.601005 | orchestrator | Monday 13 April 2026 00:42:16 +0000 (0:00:00.636) 0:00:04.517 ********** 2026-04-13 00:42:19.601013 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_36e0079f-b8cc-463e-a3d4-692b22821d05) 2026-04-13 00:42:19.601021 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_36e0079f-b8cc-463e-a3d4-692b22821d05) 2026-04-13 00:42:19.601029 | orchestrator | 2026-04-13 00:42:19.601038 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:19.601046 | orchestrator | Monday 13 April 2026 00:42:16 +0000 (0:00:00.678) 0:00:05.195 ********** 2026-04-13 00:42:19.601054 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-13 00:42:19.601062 | orchestrator | 2026-04-13 00:42:19.601070 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:19.601078 | orchestrator | Monday 13 April 2026 00:42:17 +0000 (0:00:00.837) 0:00:06.033 ********** 2026-04-13 00:42:19.601086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-13 00:42:19.601094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-13 00:42:19.601102 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-13 00:42:19.601110 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-13 00:42:19.601119 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-13 00:42:19.601127 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-13 00:42:19.601135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-13 00:42:19.601143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-04-13 00:42:19.601151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-13 00:42:19.601159 | orchestrator | included: 
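The repeated "Add known links" tasks above attach each disk's stable `/dev/disk/by-id` aliases (here the QEMU serial-number links such as `scsi-0QEMU_QEMU_HARDDISK_…`) to the device list, so later Ceph configuration can name disks without depending on `sdX` probe order. A sketch of the same resolution against a mock by-id directory; the link name below is hypothetical:

```shell
byid=$(mktemp -d)                                          # mock /dev/disk/by-id
ln -s ../../sdb "$byid/scsi-0QEMU_QEMU_HARDDISK_deadbeef"  # hypothetical alias
for link in "$byid"/*; do
  dev=$(basename "$(readlink "$link")")   # resolve alias back to the kernel name
done
rm -rf "$byid"
```

On a live host the loop would walk the real `/dev/disk/by-id` and build the alias-to-device map the play records.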
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-13 00:42:19.601167 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-13 00:42:19.601176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-13 00:42:19.601184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-13 00:42:19.601192 | orchestrator | 2026-04-13 00:42:19.601200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:19.601208 | orchestrator | Monday 13 April 2026 00:42:18 +0000 (0:00:00.390) 0:00:06.424 ********** 2026-04-13 00:42:19.601216 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:19.601224 | orchestrator | 2026-04-13 00:42:19.601232 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:19.601240 | orchestrator | Monday 13 April 2026 00:42:18 +0000 (0:00:00.205) 0:00:06.629 ********** 2026-04-13 00:42:19.601248 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:19.601256 | orchestrator | 2026-04-13 00:42:19.601265 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:19.601273 | orchestrator | Monday 13 April 2026 00:42:18 +0000 (0:00:00.216) 0:00:06.846 ********** 2026-04-13 00:42:19.601281 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:19.601294 | orchestrator | 2026-04-13 00:42:19.601303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:19.601311 | orchestrator | Monday 13 April 2026 00:42:18 +0000 (0:00:00.234) 0:00:07.080 ********** 2026-04-13 00:42:19.601319 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:19.601327 | orchestrator | 2026-04-13 00:42:19.601335 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-04-13 00:42:19.601343 | orchestrator | Monday 13 April 2026 00:42:19 +0000 (0:00:00.260) 0:00:07.340 ********** 2026-04-13 00:42:19.601351 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:19.601359 | orchestrator | 2026-04-13 00:42:19.601367 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:19.601375 | orchestrator | Monday 13 April 2026 00:42:19 +0000 (0:00:00.195) 0:00:07.536 ********** 2026-04-13 00:42:19.601383 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:19.601392 | orchestrator | 2026-04-13 00:42:19.601400 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:19.601408 | orchestrator | Monday 13 April 2026 00:42:19 +0000 (0:00:00.184) 0:00:07.720 ********** 2026-04-13 00:42:19.601416 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:19.601424 | orchestrator | 2026-04-13 00:42:19.601437 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:27.601832 | orchestrator | Monday 13 April 2026 00:42:19 +0000 (0:00:00.187) 0:00:07.908 ********** 2026-04-13 00:42:27.601948 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:27.601967 | orchestrator | 2026-04-13 00:42:27.601981 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:27.601992 | orchestrator | Monday 13 April 2026 00:42:19 +0000 (0:00:00.210) 0:00:08.119 ********** 2026-04-13 00:42:27.602004 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-13 00:42:27.602128 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-13 00:42:27.602164 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-13 00:42:27.602183 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-13 00:42:27.602200 | orchestrator | 2026-04-13 
00:42:27.602219 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:27.602260 | orchestrator | Monday 13 April 2026 00:42:20 +0000 (0:00:01.050) 0:00:09.169 **********
2026-04-13 00:42:27.602280 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.602297 | orchestrator |
2026-04-13 00:42:27.602316 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:27.602336 | orchestrator | Monday 13 April 2026 00:42:21 +0000 (0:00:00.199) 0:00:09.369 **********
2026-04-13 00:42:27.602356 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.602375 | orchestrator |
2026-04-13 00:42:27.602395 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:27.602413 | orchestrator | Monday 13 April 2026 00:42:21 +0000 (0:00:00.189) 0:00:09.558 **********
2026-04-13 00:42:27.602434 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.602455 | orchestrator |
2026-04-13 00:42:27.602474 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:27.602494 | orchestrator | Monday 13 April 2026 00:42:21 +0000 (0:00:00.190) 0:00:09.748 **********
2026-04-13 00:42:27.602508 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.602521 | orchestrator |
2026-04-13 00:42:27.602534 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-13 00:42:27.602549 | orchestrator | Monday 13 April 2026 00:42:21 +0000 (0:00:00.241) 0:00:09.990 **********
2026-04-13 00:42:27.602562 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-04-13 00:42:27.602575 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-04-13 00:42:27.602637 | orchestrator |
2026-04-13 00:42:27.602661 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-13 00:42:27.602680 | orchestrator | Monday 13 April 2026 00:42:21 +0000 (0:00:00.199) 0:00:10.189 **********
2026-04-13 00:42:27.602721 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.602733 | orchestrator |
2026-04-13 00:42:27.602744 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-13 00:42:27.602756 | orchestrator | Monday 13 April 2026 00:42:22 +0000 (0:00:00.142) 0:00:10.331 **********
2026-04-13 00:42:27.602767 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.602778 | orchestrator |
2026-04-13 00:42:27.602789 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-13 00:42:27.602801 | orchestrator | Monday 13 April 2026 00:42:22 +0000 (0:00:00.144) 0:00:10.475 **********
2026-04-13 00:42:27.602812 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.602824 | orchestrator |
2026-04-13 00:42:27.602835 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-13 00:42:27.602846 | orchestrator | Monday 13 April 2026 00:42:22 +0000 (0:00:00.139) 0:00:10.615 **********
2026-04-13 00:42:27.602858 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:42:27.602869 | orchestrator |
2026-04-13 00:42:27.602880 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-13 00:42:27.602891 | orchestrator | Monday 13 April 2026 00:42:22 +0000 (0:00:00.146) 0:00:10.761 **********
2026-04-13 00:42:27.602904 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '273f60d0-eab1-5837-bb33-0c04c9e5b829'}})
2026-04-13 00:42:27.602916 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f99b2314-ad51-5797-a71e-17207c9800e6'}})
2026-04-13 00:42:27.602927 | orchestrator |
2026-04-13 00:42:27.602939 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-13 00:42:27.602951 | orchestrator | Monday 13 April 2026 00:42:22 +0000 (0:00:00.179) 0:00:10.940 **********
2026-04-13 00:42:27.602963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '273f60d0-eab1-5837-bb33-0c04c9e5b829'}})
2026-04-13 00:42:27.602984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f99b2314-ad51-5797-a71e-17207c9800e6'}})
2026-04-13 00:42:27.603002 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.603013 | orchestrator |
2026-04-13 00:42:27.603025 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-13 00:42:27.603036 | orchestrator | Monday 13 April 2026 00:42:22 +0000 (0:00:00.160) 0:00:11.101 **********
2026-04-13 00:42:27.603048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '273f60d0-eab1-5837-bb33-0c04c9e5b829'}})
2026-04-13 00:42:27.603059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f99b2314-ad51-5797-a71e-17207c9800e6'}})
2026-04-13 00:42:27.603071 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.603082 | orchestrator |
2026-04-13 00:42:27.603093 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-13 00:42:27.603105 | orchestrator | Monday 13 April 2026 00:42:23 +0000 (0:00:00.380) 0:00:11.481 **********
2026-04-13 00:42:27.603116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '273f60d0-eab1-5837-bb33-0c04c9e5b829'}})
2026-04-13 00:42:27.603148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f99b2314-ad51-5797-a71e-17207c9800e6'}})
2026-04-13 00:42:27.603160 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.603172 | orchestrator |
2026-04-13 00:42:27.603183 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-13 00:42:27.603195 | orchestrator | Monday 13 April 2026 00:42:23 +0000 (0:00:00.162) 0:00:11.644 **********
2026-04-13 00:42:27.603207 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:42:27.603218 | orchestrator |
2026-04-13 00:42:27.603230 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-13 00:42:27.603241 | orchestrator | Monday 13 April 2026 00:42:23 +0000 (0:00:00.143) 0:00:11.788 **********
2026-04-13 00:42:27.603252 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:42:27.603272 | orchestrator |
2026-04-13 00:42:27.603284 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-13 00:42:27.603296 | orchestrator | Monday 13 April 2026 00:42:23 +0000 (0:00:00.163) 0:00:11.952 **********
2026-04-13 00:42:27.603307 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.603319 | orchestrator |
2026-04-13 00:42:27.603331 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-13 00:42:27.603342 | orchestrator | Monday 13 April 2026 00:42:23 +0000 (0:00:00.136) 0:00:12.089 **********
2026-04-13 00:42:27.603354 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.603365 | orchestrator |
2026-04-13 00:42:27.603377 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-13 00:42:27.603388 | orchestrator | Monday 13 April 2026 00:42:23 +0000 (0:00:00.138) 0:00:12.227 **********
2026-04-13 00:42:27.603400 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.603411 | orchestrator |
2026-04-13 00:42:27.603422 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-13 00:42:27.603434 | orchestrator | Monday 13 April 2026 00:42:24 +0000 (0:00:00.125) 0:00:12.352 **********
2026-04-13 00:42:27.603445 | orchestrator | ok: [testbed-node-3] => {
2026-04-13 00:42:27.603457 | orchestrator |  "ceph_osd_devices": {
2026-04-13 00:42:27.603469 | orchestrator |  "sdb": {
2026-04-13 00:42:27.603480 | orchestrator |  "osd_lvm_uuid": "273f60d0-eab1-5837-bb33-0c04c9e5b829"
2026-04-13 00:42:27.603492 | orchestrator |  },
2026-04-13 00:42:27.603503 | orchestrator |  "sdc": {
2026-04-13 00:42:27.603515 | orchestrator |  "osd_lvm_uuid": "f99b2314-ad51-5797-a71e-17207c9800e6"
2026-04-13 00:42:27.603526 | orchestrator |  }
2026-04-13 00:42:27.603538 | orchestrator |  }
2026-04-13 00:42:27.603549 | orchestrator | }
2026-04-13 00:42:27.603561 | orchestrator |
2026-04-13 00:42:27.603573 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-13 00:42:27.603604 | orchestrator | Monday 13 April 2026 00:42:24 +0000 (0:00:00.140) 0:00:12.492 **********
2026-04-13 00:42:27.603625 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.603643 | orchestrator |
2026-04-13 00:42:27.603661 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-13 00:42:27.603680 | orchestrator | Monday 13 April 2026 00:42:24 +0000 (0:00:00.139) 0:00:12.632 **********
2026-04-13 00:42:27.603700 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.603719 | orchestrator |
2026-04-13 00:42:27.603739 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-13 00:42:27.603755 | orchestrator | Monday 13 April 2026 00:42:24 +0000 (0:00:00.132) 0:00:12.765 **********
2026-04-13 00:42:27.603766 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.603777 | orchestrator |
2026-04-13 00:42:27.603788 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-13 00:42:27.603800 | orchestrator | Monday 13 April 2026 00:42:24 +0000 (0:00:00.129) 0:00:12.895 **********
2026-04-13 00:42:27.603811 | orchestrator | changed: [testbed-node-3] => {
2026-04-13 00:42:27.603822 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-04-13 00:42:27.603833 | orchestrator |  "ceph_osd_devices": {
2026-04-13 00:42:27.603845 | orchestrator |  "sdb": {
2026-04-13 00:42:27.603856 | orchestrator |  "osd_lvm_uuid": "273f60d0-eab1-5837-bb33-0c04c9e5b829"
2026-04-13 00:42:27.603867 | orchestrator |  },
2026-04-13 00:42:27.603878 | orchestrator |  "sdc": {
2026-04-13 00:42:27.603889 | orchestrator |  "osd_lvm_uuid": "f99b2314-ad51-5797-a71e-17207c9800e6"
2026-04-13 00:42:27.603901 | orchestrator |  }
2026-04-13 00:42:27.603912 | orchestrator |  },
2026-04-13 00:42:27.603923 | orchestrator |  "lvm_volumes": [
2026-04-13 00:42:27.603934 | orchestrator |  {
2026-04-13 00:42:27.603946 | orchestrator |  "data": "osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829",
2026-04-13 00:42:27.603957 | orchestrator |  "data_vg": "ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829"
2026-04-13 00:42:27.603977 | orchestrator |  },
2026-04-13 00:42:27.603989 | orchestrator |  {
2026-04-13 00:42:27.604000 | orchestrator |  "data": "osd-block-f99b2314-ad51-5797-a71e-17207c9800e6",
2026-04-13 00:42:27.604011 | orchestrator |  "data_vg": "ceph-f99b2314-ad51-5797-a71e-17207c9800e6"
2026-04-13 00:42:27.604023 | orchestrator |  }
2026-04-13 00:42:27.604034 | orchestrator |  ]
2026-04-13 00:42:27.604045 | orchestrator |  }
2026-04-13 00:42:27.604056 | orchestrator | }
2026-04-13 00:42:27.604068 | orchestrator |
2026-04-13 00:42:27.604079 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-13 00:42:27.604090 | orchestrator | Monday 13 April 2026 00:42:24 +0000 (0:00:00.200) 0:00:13.096 **********
2026-04-13 00:42:27.604101 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-13 00:42:27.604112 | orchestrator |
2026-04-13 00:42:27.604124 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-13 00:42:27.604135 | orchestrator |
2026-04-13 00:42:27.604146 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-13 00:42:27.604157 | orchestrator | Monday 13 April 2026 00:42:27 +0000 (0:00:02.285) 0:00:15.381 **********
2026-04-13 00:42:27.604169 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-13 00:42:27.604180 | orchestrator |
2026-04-13 00:42:27.604191 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-13 00:42:27.604202 | orchestrator | Monday 13 April 2026 00:42:27 +0000 (0:00:00.301) 0:00:15.683 **********
2026-04-13 00:42:27.604214 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:42:27.604225 | orchestrator |
2026-04-13 00:42:27.604244 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:35.555391 | orchestrator | Monday 13 April 2026 00:42:27 +0000 (0:00:00.231) 0:00:15.914 **********
2026-04-13 00:42:35.555531 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-04-13 00:42:35.555555 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-04-13 00:42:35.555568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-04-13 00:42:35.555577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-04-13 00:42:35.555669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-04-13 00:42:35.555680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-04-13 00:42:35.555689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-04-13 00:42:35.555704 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-04-13 00:42:35.555714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-04-13 00:42:35.555724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-04-13 00:42:35.555733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-04-13 00:42:35.555743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-04-13 00:42:35.555769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-04-13 00:42:35.555779 | orchestrator |
2026-04-13 00:42:35.555788 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:35.555798 | orchestrator | Monday 13 April 2026 00:42:27 +0000 (0:00:00.391) 0:00:16.306 **********
2026-04-13 00:42:35.555807 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.555817 | orchestrator |
2026-04-13 00:42:35.555826 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:35.555835 | orchestrator | Monday 13 April 2026 00:42:28 +0000 (0:00:00.212) 0:00:16.518 **********
2026-04-13 00:42:35.555865 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.555875 | orchestrator |
2026-04-13 00:42:35.555884 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:35.555893 | orchestrator | Monday 13 April 2026 00:42:28 +0000 (0:00:00.200) 0:00:16.719 **********
2026-04-13 00:42:35.555902 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.555911 | orchestrator |
2026-04-13 00:42:35.555920 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:35.555929 | orchestrator | Monday 13 April 2026 00:42:28 +0000 (0:00:00.203) 0:00:16.922 **********
2026-04-13 00:42:35.555939 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.555949 | orchestrator |
2026-04-13 00:42:35.555959 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:35.555969 | orchestrator | Monday 13 April 2026 00:42:28 +0000 (0:00:00.195) 0:00:17.118 **********
2026-04-13 00:42:35.555979 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.555989 | orchestrator |
2026-04-13 00:42:35.556000 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:35.556010 | orchestrator | Monday 13 April 2026 00:42:29 +0000 (0:00:00.697) 0:00:17.816 **********
2026-04-13 00:42:35.556019 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.556028 | orchestrator |
2026-04-13 00:42:35.556037 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:35.556046 | orchestrator | Monday 13 April 2026 00:42:29 +0000 (0:00:00.186) 0:00:18.002 **********
2026-04-13 00:42:35.556055 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.556064 | orchestrator |
2026-04-13 00:42:35.556073 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:35.556082 | orchestrator | Monday 13 April 2026 00:42:29 +0000 (0:00:00.216) 0:00:18.219 **********
2026-04-13 00:42:35.556091 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.556100 | orchestrator |
2026-04-13 00:42:35.556109 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:35.556118 | orchestrator | Monday 13 April 2026 00:42:30 +0000 (0:00:00.198) 0:00:18.418 **********
2026-04-13 00:42:35.556127 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7)
2026-04-13 00:42:35.556137 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7)
2026-04-13 00:42:35.556146 | orchestrator |
2026-04-13 00:42:35.556155 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:35.556164 | orchestrator | Monday 13 April 2026 00:42:30 +0000 (0:00:00.419) 0:00:18.837 **********
2026-04-13 00:42:35.556173 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_64ba95e0-52ec-4080-a400-33c71893d605)
2026-04-13 00:42:35.556182 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_64ba95e0-52ec-4080-a400-33c71893d605)
2026-04-13 00:42:35.556191 | orchestrator |
2026-04-13 00:42:35.556200 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:35.556209 | orchestrator | Monday 13 April 2026 00:42:30 +0000 (0:00:00.413) 0:00:19.251 **********
2026-04-13 00:42:35.556217 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8eda79f4-f653-48ca-bc7b-44aba519c194)
2026-04-13 00:42:35.556226 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8eda79f4-f653-48ca-bc7b-44aba519c194)
2026-04-13 00:42:35.556235 | orchestrator |
2026-04-13 00:42:35.556245 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:35.556271 | orchestrator | Monday 13 April 2026 00:42:31 +0000 (0:00:00.419) 0:00:19.671 **********
2026-04-13 00:42:35.556282 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9aa3d683-c16f-4a6c-9923-af2b5f9d7d5e)
2026-04-13 00:42:35.556291 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9aa3d683-c16f-4a6c-9923-af2b5f9d7d5e)
2026-04-13 00:42:35.556300 | orchestrator |
2026-04-13 00:42:35.556315 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:35.556324 | orchestrator | Monday 13 April 2026 00:42:31 +0000 (0:00:00.431) 0:00:20.102 **********
2026-04-13 00:42:35.556333 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-13 00:42:35.556342 | orchestrator |
2026-04-13 00:42:35.556351 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:35.556360 | orchestrator | Monday 13 April 2026 00:42:32 +0000 (0:00:00.351) 0:00:20.453 **********
2026-04-13 00:42:35.556369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-13 00:42:35.556378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-13 00:42:35.556394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-13 00:42:35.556403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-13 00:42:35.556412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-13 00:42:35.556421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-13 00:42:35.556430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-13 00:42:35.556439 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-13 00:42:35.556448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-13 00:42:35.556457 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-13 00:42:35.556466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-13 00:42:35.556475 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-13 00:42:35.556484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-13 00:42:35.556494 | orchestrator |
2026-04-13 00:42:35.556503 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:35.556512 | orchestrator | Monday 13 April 2026 00:42:32 +0000 (0:00:00.396) 0:00:20.850 **********
2026-04-13 00:42:35.556521 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.556530 | orchestrator |
2026-04-13 00:42:35.556539 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:35.556548 | orchestrator | Monday 13 April 2026 00:42:32 +0000 (0:00:00.202) 0:00:21.052 **********
2026-04-13 00:42:35.556557 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.556566 | orchestrator |
2026-04-13 00:42:35.556575 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:35.556634 | orchestrator | Monday 13 April 2026 00:42:33 +0000 (0:00:00.807) 0:00:21.860 **********
2026-04-13 00:42:35.556644 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.556653 | orchestrator |
2026-04-13 00:42:35.556662 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:35.556671 | orchestrator | Monday 13 April 2026 00:42:33 +0000 (0:00:00.208) 0:00:22.069 **********
2026-04-13 00:42:35.556679 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.556688 | orchestrator |
2026-04-13 00:42:35.556697 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:35.556706 | orchestrator | Monday 13 April 2026 00:42:33 +0000 (0:00:00.200) 0:00:22.269 **********
2026-04-13 00:42:35.556715 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.556724 | orchestrator |
2026-04-13 00:42:35.556736 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:35.556752 | orchestrator | Monday 13 April 2026 00:42:34 +0000 (0:00:00.200) 0:00:22.470 **********
2026-04-13 00:42:35.556766 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.556789 | orchestrator |
2026-04-13 00:42:35.556804 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:35.556817 | orchestrator | Monday 13 April 2026 00:42:34 +0000 (0:00:00.209) 0:00:22.680 **********
2026-04-13 00:42:35.556830 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.556844 | orchestrator |
2026-04-13 00:42:35.556859 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:35.556870 | orchestrator | Monday 13 April 2026 00:42:34 +0000 (0:00:00.188) 0:00:22.868 **********
2026-04-13 00:42:35.556879 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:35.556888 | orchestrator |
2026-04-13 00:42:35.556897 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:35.556906 | orchestrator | Monday 13 April 2026 00:42:34 +0000 (0:00:00.206) 0:00:23.075 **********
2026-04-13 00:42:35.556915 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-13 00:42:35.556924 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-13 00:42:35.556939 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-13 00:42:35.556952 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-13 00:42:35.556965 | orchestrator |
2026-04-13 00:42:35.556979 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:35.556993 | orchestrator | Monday 13 April 2026 00:42:35 +0000 (0:00:00.678) 0:00:23.753 **********
2026-04-13 00:42:35.557006 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.919898 | orchestrator |
2026-04-13 00:42:42.919978 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:42.919987 | orchestrator | Monday 13 April 2026 00:42:35 +0000 (0:00:00.199) 0:00:23.952 **********
2026-04-13 00:42:42.919993 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.920000 | orchestrator |
2026-04-13 00:42:42.920006 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:42.920012 | orchestrator | Monday 13 April 2026 00:42:35 +0000 (0:00:00.204) 0:00:24.157 **********
2026-04-13 00:42:42.920017 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.920022 | orchestrator |
2026-04-13 00:42:42.920028 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:42.920033 | orchestrator | Monday 13 April 2026 00:42:36 +0000 (0:00:00.189) 0:00:24.347 **********
2026-04-13 00:42:42.920038 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.920043 | orchestrator |
2026-04-13 00:42:42.920048 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-13 00:42:42.920053 | orchestrator | Monday 13 April 2026 00:42:36 +0000 (0:00:00.195) 0:00:24.542 **********
2026-04-13 00:42:42.920058 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-04-13 00:42:42.920063 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-04-13 00:42:42.920068 | orchestrator |
2026-04-13 00:42:42.920073 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-13 00:42:42.920091 | orchestrator | Monday 13 April 2026 00:42:36 +0000 (0:00:00.416) 0:00:24.959 **********
2026-04-13 00:42:42.920097 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.920102 | orchestrator |
2026-04-13 00:42:42.920107 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-13 00:42:42.920112 | orchestrator | Monday 13 April 2026 00:42:36 +0000 (0:00:00.124) 0:00:25.083 **********
2026-04-13 00:42:42.920117 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.920122 | orchestrator |
2026-04-13 00:42:42.920127 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-13 00:42:42.920135 | orchestrator | Monday 13 April 2026 00:42:36 +0000 (0:00:00.122) 0:00:25.206 **********
2026-04-13 00:42:42.920140 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.920145 | orchestrator |
2026-04-13 00:42:42.920150 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-13 00:42:42.920155 | orchestrator | Monday 13 April 2026 00:42:37 +0000 (0:00:00.216) 0:00:25.422 **********
2026-04-13 00:42:42.920177 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:42:42.920183 | orchestrator |
2026-04-13 00:42:42.920188 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-13 00:42:42.920193 | orchestrator | Monday 13 April 2026 00:42:37 +0000 (0:00:00.154) 0:00:25.577 **********
2026-04-13 00:42:42.920199 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '976187fe-8802-504d-92cd-339995e22605'}})
2026-04-13 00:42:42.920205 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '204a2e69-8032-57e4-80e8-bdb37f98e657'}})
2026-04-13 00:42:42.920210 | orchestrator |
2026-04-13 00:42:42.920215 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-13 00:42:42.920220 | orchestrator | Monday 13 April 2026 00:42:37 +0000 (0:00:00.173) 0:00:25.751 **********
2026-04-13 00:42:42.920225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '976187fe-8802-504d-92cd-339995e22605'}})
2026-04-13 00:42:42.920232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '204a2e69-8032-57e4-80e8-bdb37f98e657'}})
2026-04-13 00:42:42.920237 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.920242 | orchestrator |
2026-04-13 00:42:42.920247 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-13 00:42:42.920252 | orchestrator | Monday 13 April 2026 00:42:37 +0000 (0:00:00.155) 0:00:25.906 **********
2026-04-13 00:42:42.920257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '976187fe-8802-504d-92cd-339995e22605'}})
2026-04-13 00:42:42.920262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '204a2e69-8032-57e4-80e8-bdb37f98e657'}})
2026-04-13 00:42:42.920267 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.920272 | orchestrator |
2026-04-13 00:42:42.920277 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-13 00:42:42.920282 | orchestrator | Monday 13 April 2026 00:42:37 +0000 (0:00:00.161) 0:00:26.068 **********
2026-04-13 00:42:42.920287 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '976187fe-8802-504d-92cd-339995e22605'}})
2026-04-13 00:42:42.920292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '204a2e69-8032-57e4-80e8-bdb37f98e657'}})
2026-04-13 00:42:42.920297 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.920302 | orchestrator |
2026-04-13 00:42:42.920307 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-13 00:42:42.920312 | orchestrator | Monday 13 April 2026 00:42:37 +0000 (0:00:00.234) 0:00:26.302 **********
2026-04-13 00:42:42.920317 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:42:42.920322 | orchestrator |
2026-04-13 00:42:42.920327 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-13 00:42:42.920332 | orchestrator | Monday 13 April 2026 00:42:38 +0000 (0:00:00.146) 0:00:26.449 **********
2026-04-13 00:42:42.920337 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:42:42.920342 | orchestrator |
2026-04-13 00:42:42.920347 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-13 00:42:42.920352 | orchestrator | Monday 13 April 2026 00:42:38 +0000 (0:00:00.174) 0:00:26.624 **********
2026-04-13 00:42:42.920368 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.920373 | orchestrator |
2026-04-13 00:42:42.920378 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-13 00:42:42.920383 | orchestrator | Monday 13 April 2026 00:42:38 +0000 (0:00:00.142) 0:00:26.766 **********
2026-04-13 00:42:42.920388 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.920393 | orchestrator |
2026-04-13 00:42:42.920398 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-13 00:42:42.920403 | orchestrator | Monday 13 April 2026 00:42:38 +0000 (0:00:00.532) 0:00:27.298 **********
2026-04-13 00:42:42.920408 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.920419 | orchestrator |
2026-04-13 00:42:42.920424 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-13 00:42:42.920429 | orchestrator | Monday 13 April 2026 00:42:39 +0000 (0:00:00.251) 0:00:27.549 **********
2026-04-13 00:42:42.920434 | orchestrator | ok: [testbed-node-4] => {
2026-04-13 00:42:42.920439 | orchestrator |  "ceph_osd_devices": {
2026-04-13 00:42:42.920444 | orchestrator |  "sdb": {
2026-04-13 00:42:42.920449 | orchestrator |  "osd_lvm_uuid": "976187fe-8802-504d-92cd-339995e22605"
2026-04-13 00:42:42.920454 | orchestrator |  },
2026-04-13 00:42:42.920460 | orchestrator |  "sdc": {
2026-04-13 00:42:42.920466 | orchestrator |  "osd_lvm_uuid": "204a2e69-8032-57e4-80e8-bdb37f98e657"
2026-04-13 00:42:42.920471 | orchestrator |  }
2026-04-13 00:42:42.920477 | orchestrator |  }
2026-04-13 00:42:42.920482 | orchestrator | }
2026-04-13 00:42:42.920488 | orchestrator |
2026-04-13 00:42:42.920494 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-13 00:42:42.920499 | orchestrator | Monday 13 April 2026 00:42:39 +0000 (0:00:00.174) 0:00:27.724 **********
2026-04-13 00:42:42.920505 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.920510 | orchestrator |
2026-04-13 00:42:42.920516 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-13 00:42:42.920522 | orchestrator | Monday 13 April 2026 00:42:39 +0000 (0:00:00.143) 0:00:27.868 **********
2026-04-13 00:42:42.920528 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.920533 | orchestrator |
2026-04-13 00:42:42.920539 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-13 00:42:42.920544 | orchestrator | Monday 13 April 2026 00:42:39 +0000 (0:00:00.136) 0:00:28.004 **********
2026-04-13 00:42:42.920550 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.920555 | orchestrator |
2026-04-13 00:42:42.920561 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-13 00:42:42.920570 | orchestrator | Monday 13 April 2026 00:42:39 +0000 (0:00:00.138) 0:00:28.143 **********
2026-04-13 00:42:42.920611 | orchestrator | changed: [testbed-node-4] => {
2026-04-13 00:42:42.920617 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-04-13 00:42:42.920622 | orchestrator |  "ceph_osd_devices": {
2026-04-13 00:42:42.920628 | orchestrator |  "sdb": {
2026-04-13 00:42:42.920634 | orchestrator |  "osd_lvm_uuid": "976187fe-8802-504d-92cd-339995e22605"
2026-04-13 00:42:42.920640 | orchestrator |  },
2026-04-13 00:42:42.920646 | orchestrator |  "sdc": {
2026-04-13 00:42:42.920651 | orchestrator |  "osd_lvm_uuid": "204a2e69-8032-57e4-80e8-bdb37f98e657"
2026-04-13 00:42:42.920657 | orchestrator |  }
2026-04-13 00:42:42.920662 | orchestrator |  },
2026-04-13 00:42:42.920668 | orchestrator |  "lvm_volumes": [
2026-04-13 00:42:42.920674 | orchestrator |  {
2026-04-13 00:42:42.920679 | orchestrator |  "data": "osd-block-976187fe-8802-504d-92cd-339995e22605",
2026-04-13 00:42:42.920685 | orchestrator |  "data_vg": "ceph-976187fe-8802-504d-92cd-339995e22605"
2026-04-13 00:42:42.920691 | orchestrator |  },
2026-04-13 00:42:42.920696 | orchestrator |  {
2026-04-13 00:42:42.920702 | orchestrator |  "data": "osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657",
2026-04-13 00:42:42.920707 | orchestrator |  "data_vg": "ceph-204a2e69-8032-57e4-80e8-bdb37f98e657"
2026-04-13 00:42:42.920713 | orchestrator |  }
2026-04-13 00:42:42.920719 | orchestrator |  ]
2026-04-13 00:42:42.920724 | orchestrator |  }
2026-04-13 00:42:42.920730 | orchestrator | }
2026-04-13 00:42:42.920735 | orchestrator |
2026-04-13 00:42:42.920741 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-13 00:42:42.920746 | orchestrator | Monday 13 April 2026 00:42:40 +0000 (0:00:00.228) 0:00:28.372 **********
2026-04-13 00:42:42.920751 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-13 00:42:42.920756 | orchestrator |
2026-04-13 00:42:42.920766 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-13 00:42:42.920771 | orchestrator |
2026-04-13 00:42:42.920776 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-13 00:42:42.920781 | orchestrator | Monday 13 April 2026 00:42:41 +0000 (0:00:01.199) 0:00:29.571 **********
2026-04-13 00:42:42.920786 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-13 00:42:42.920791 | orchestrator |
2026-04-13 00:42:42.920796 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-13 00:42:42.920801 | orchestrator | Monday 13 April 2026 00:42:41 +0000 (0:00:00.541) 0:00:30.112 **********
2026-04-13 00:42:42.920806 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:42:42.920811 | orchestrator |
2026-04-13 00:42:42.920825 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:42.920830 | orchestrator | Monday 13 April 2026 00:42:42 +0000 (0:00:00.782) 0:00:30.894 **********
2026-04-13 00:42:42.920835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-13 00:42:42.920840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-13 00:42:42.920852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-13 00:42:42.920857 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-13 00:42:42.920862 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-13 00:42:42.920871 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-13 00:42:52.054885 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-13 00:42:52.054982 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-13 00:42:52.054992 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-13 00:42:52.054999 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-13 00:42:52.055006 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-13 00:42:52.055013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-13 00:42:52.055020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-13 00:42:52.055028 | orchestrator |
2026-04-13 00:42:52.055036 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:52.055043 | orchestrator | Monday 13 April 2026 00:42:42 +0000 (0:00:00.422) 0:00:31.317 **********
2026-04-13 00:42:52.055050 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055058 | orchestrator |
2026-04-13 00:42:52.055065 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:52.055071 | orchestrator | Monday 13 April 2026 00:42:43 +0000 (0:00:00.290) 0:00:31.607 **********
2026-04-13 00:42:52.055077 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055083 | orchestrator |
2026-04-13 00:42:52.055090 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:52.055096 | orchestrator | Monday 13 April 2026 00:42:43 +0000 (0:00:00.241) 0:00:31.849 **********
2026-04-13 00:42:52.055102 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055109 | orchestrator |
2026-04-13 00:42:52.055115 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:52.055121 | orchestrator | Monday 13 April 2026 00:42:43 +0000 (0:00:00.219) 0:00:32.069 **********
2026-04-13 00:42:52.055127 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055134 | orchestrator |
2026-04-13 00:42:52.055140 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:52.055146 | orchestrator | Monday 13 April 2026 00:42:44 +0000 (0:00:00.263) 0:00:32.332 **********
2026-04-13 00:42:52.055176 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055184 | orchestrator |
2026-04-13 00:42:52.055190 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:52.055196 | orchestrator | Monday 13 April 2026 00:42:44 +0000 (0:00:00.196) 0:00:32.529 **********
2026-04-13 00:42:52.055203 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055208 | orchestrator |
2026-04-13 00:42:52.055214 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:52.055220 | orchestrator | Monday 13 April 2026 00:42:44 +0000 (0:00:00.222) 0:00:32.752 **********
2026-04-13 00:42:52.055227 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055233 | orchestrator |
2026-04-13 00:42:52.055239 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:52.055246 | orchestrator | Monday 13 April 2026 00:42:44 +0000 (0:00:00.196) 0:00:32.949 **********
2026-04-13 00:42:52.055252 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055258 | orchestrator |
2026-04-13 00:42:52.055264 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:52.055271 | orchestrator | Monday 13 April 2026 00:42:44 +0000 (0:00:00.213) 0:00:33.162 **********
2026-04-13 00:42:52.055277 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a)
2026-04-13 00:42:52.055285 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a)
2026-04-13 00:42:52.055291 | orchestrator |
2026-04-13 00:42:52.055298 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:52.055304 | orchestrator | Monday 13 April 2026 00:42:45 +0000 (0:00:00.755) 0:00:33.918 **********
2026-04-13 00:42:52.055327 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2beae69f-4f2c-4ffb-b1cc-4fe56058469a)
2026-04-13 00:42:52.055335 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2beae69f-4f2c-4ffb-b1cc-4fe56058469a)
2026-04-13 00:42:52.055341 | orchestrator |
2026-04-13 00:42:52.055347 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:52.055353 | orchestrator | Monday 13 April 2026 00:42:46 +0000 (0:00:00.940) 0:00:34.858 **********
2026-04-13 00:42:52.055359 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7036bc7f-1d9f-4bbc-89ec-79faed4557a7)
2026-04-13 00:42:52.055366 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7036bc7f-1d9f-4bbc-89ec-79faed4557a7)
2026-04-13 00:42:52.055372 | orchestrator |
2026-04-13 00:42:52.055378 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:52.055384 | orchestrator | Monday 13 April 2026 00:42:47 +0000 (0:00:00.496) 0:00:35.355 **********
2026-04-13 00:42:52.055390 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_210099df-3e7f-48c2-8d6b-572e8a7c1923)
2026-04-13 00:42:52.055396 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_210099df-3e7f-48c2-8d6b-572e8a7c1923)
2026-04-13 00:42:52.055402 | orchestrator |
2026-04-13 00:42:52.055408 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:52.055414 | orchestrator | Monday 13 April 2026 00:42:47 +0000 (0:00:00.463) 0:00:35.819 **********
2026-04-13 00:42:52.055421 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-13 00:42:52.055427 | orchestrator |
2026-04-13 00:42:52.055434 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:52.055458 | orchestrator | Monday 13 April 2026 00:42:47 +0000 (0:00:00.346) 0:00:36.165 **********
2026-04-13 00:42:52.055465 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-13 00:42:52.055472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-13 00:42:52.055479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-13 00:42:52.055486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-13 00:42:52.055499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-13 00:42:52.055506 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-13 00:42:52.055514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-13 00:42:52.055520 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-13 00:42:52.055527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-13 00:42:52.055534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-13 00:42:52.055540 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-13 00:42:52.055547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-13 00:42:52.055553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-13 00:42:52.055559 | orchestrator |
2026-04-13 00:42:52.055566 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:52.055620 | orchestrator | Monday 13 April 2026 00:42:48 +0000 (0:00:00.380) 0:00:36.546 **********
2026-04-13 00:42:52.055627 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055633 | orchestrator |
2026-04-13 00:42:52.055640 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:52.055646 | orchestrator | Monday 13 April 2026 00:42:48 +0000 (0:00:00.197) 0:00:36.744 **********
2026-04-13 00:42:52.055652 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055659 | orchestrator |
2026-04-13 00:42:52.055666 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:52.055672 | orchestrator | Monday 13 April 2026 00:42:48 +0000 (0:00:00.209) 0:00:36.954 **********
2026-04-13 00:42:52.055679 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055685 | orchestrator |
2026-04-13 00:42:52.055692 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:52.055698 | orchestrator | Monday 13 April 2026 00:42:48 +0000 (0:00:00.208) 0:00:37.162 **********
2026-04-13 00:42:52.055705 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055711 | orchestrator |
2026-04-13 00:42:52.055718 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:52.055724 | orchestrator | Monday 13 April 2026 00:42:49 +0000 (0:00:00.243) 0:00:37.406 **********
2026-04-13 00:42:52.055731 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055738 | orchestrator |
2026-04-13 00:42:52.055745 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:52.055751 | orchestrator | Monday 13 April 2026 00:42:49 +0000 (0:00:00.242) 0:00:37.649 **********
2026-04-13 00:42:52.055758 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055764 | orchestrator |
2026-04-13 00:42:52.055770 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:52.055777 | orchestrator | Monday 13 April 2026 00:42:50 +0000 (0:00:00.786) 0:00:38.435 **********
2026-04-13 00:42:52.055783 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055789 | orchestrator |
2026-04-13 00:42:52.055796 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:52.055802 | orchestrator | Monday 13 April 2026 00:42:50 +0000 (0:00:00.204) 0:00:38.640 **********
2026-04-13 00:42:52.055808 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055813 | orchestrator |
2026-04-13 00:42:52.055819 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:52.055825 | orchestrator | Monday 13 April 2026 00:42:50 +0000 (0:00:00.218) 0:00:38.858 **********
2026-04-13 00:42:52.055831 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-13 00:42:52.055845 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-13 00:42:52.055852 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-13 00:42:52.055858 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-13 00:42:52.055864 | orchestrator |
2026-04-13 00:42:52.055871 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:52.055878 | orchestrator | Monday 13 April 2026 00:42:51 +0000 (0:00:00.714) 0:00:39.573 **********
2026-04-13 00:42:52.055884 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055890 | orchestrator |
2026-04-13 00:42:52.055896 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:52.055903 | orchestrator | Monday 13 April 2026 00:42:51 +0000 (0:00:00.213) 0:00:39.786 **********
2026-04-13 00:42:52.055910 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055917 | orchestrator |
2026-04-13 00:42:52.055923 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:52.055929 | orchestrator | Monday 13 April 2026 00:42:51 +0000 (0:00:00.187) 0:00:39.974 **********
2026-04-13 00:42:52.055936 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055942 | orchestrator |
2026-04-13 00:42:52.055948 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:52.055953 | orchestrator | Monday 13 April 2026 00:42:51 +0000 (0:00:00.208) 0:00:40.183 **********
2026-04-13 00:42:52.055960 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:52.055965 | orchestrator |
2026-04-13 00:42:52.055980 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-13 00:42:56.801375 | orchestrator | Monday 13 April 2026 00:42:52 +0000 (0:00:00.184) 0:00:40.367 **********
2026-04-13 00:42:56.801472 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-04-13 00:42:56.801484 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-04-13 00:42:56.801494 | orchestrator |
2026-04-13 00:42:56.801505 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-13 00:42:56.801514 | orchestrator | Monday 13 April 2026 00:42:52 +0000 (0:00:00.180) 0:00:40.547 **********
2026-04-13 00:42:56.801524 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:56.801533 | orchestrator |
2026-04-13 00:42:56.801542 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-13 00:42:56.801552 | orchestrator | Monday 13 April 2026 00:42:52 +0000 (0:00:00.151) 0:00:40.699 **********
2026-04-13 00:42:56.801628 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:56.801641 | orchestrator |
2026-04-13 00:42:56.801650 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-13 00:42:56.801659 | orchestrator | Monday 13 April 2026 00:42:52 +0000 (0:00:00.150) 0:00:40.850 **********
2026-04-13 00:42:56.801668 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:56.801677 | orchestrator |
2026-04-13 00:42:56.801687 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-13 00:42:56.801701 | orchestrator | Monday 13 April 2026 00:42:52 +0000 (0:00:00.139) 0:00:40.990 **********
2026-04-13 00:42:56.801717 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:42:56.801734 | orchestrator |
2026-04-13 00:42:56.801749 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-13 00:42:56.801766 | orchestrator | Monday 13 April 2026 00:42:53 +0000 (0:00:00.398) 0:00:41.388 **********
2026-04-13 00:42:56.801784 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ae95053f-cfae-50f3-8301-23c2132e6da4'}})
2026-04-13 00:42:56.801803 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '42f39a41-1a89-55d6-ba76-16e64e7a2b2d'}})
2026-04-13 00:42:56.801824 | orchestrator |
2026-04-13 00:42:56.801833 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-13 00:42:56.801842 | orchestrator | Monday 13 April 2026 00:42:53 +0000 (0:00:00.169) 0:00:41.558 **********
2026-04-13 00:42:56.801852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ae95053f-cfae-50f3-8301-23c2132e6da4'}})
2026-04-13 00:42:56.801885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '42f39a41-1a89-55d6-ba76-16e64e7a2b2d'}})
2026-04-13 00:42:56.801894 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:56.801904 | orchestrator |
2026-04-13 00:42:56.801914 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-13 00:42:56.801924 | orchestrator | Monday 13 April 2026 00:42:53 +0000 (0:00:00.169) 0:00:41.728 **********
2026-04-13 00:42:56.801934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ae95053f-cfae-50f3-8301-23c2132e6da4'}})
2026-04-13 00:42:56.801944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '42f39a41-1a89-55d6-ba76-16e64e7a2b2d'}})
2026-04-13 00:42:56.801954 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:56.801964 | orchestrator |
2026-04-13 00:42:56.801974 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-13 00:42:56.801985 | orchestrator | Monday 13 April 2026 00:42:53 +0000 (0:00:00.175) 0:00:41.904 **********
2026-04-13 00:42:56.801995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ae95053f-cfae-50f3-8301-23c2132e6da4'}})
2026-04-13 00:42:56.802004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '42f39a41-1a89-55d6-ba76-16e64e7a2b2d'}})
2026-04-13 00:42:56.802013 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:56.802083 | orchestrator |
2026-04-13 00:42:56.802099 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-13 00:42:56.802113 | orchestrator | Monday 13 April 2026 00:42:53 +0000 (0:00:00.179) 0:00:42.084 **********
2026-04-13 00:42:56.802127 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:42:56.802140 | orchestrator |
2026-04-13 00:42:56.802155 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-13 00:42:56.802170 | orchestrator | Monday 13 April 2026 00:42:53 +0000 (0:00:00.182) 0:00:42.267 **********
2026-04-13 00:42:56.802184 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:42:56.802198 | orchestrator |
2026-04-13 00:42:56.802212 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-13 00:42:56.802227 | orchestrator | Monday 13 April 2026 00:42:54 +0000 (0:00:00.154) 0:00:42.421 **********
2026-04-13 00:42:56.802243 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:56.802259 | orchestrator |
2026-04-13 00:42:56.802275 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-13 00:42:56.802289 | orchestrator | Monday 13 April 2026 00:42:54 +0000 (0:00:00.144) 0:00:42.565 **********
2026-04-13 00:42:56.802306 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:56.802322 | orchestrator |
2026-04-13 00:42:56.802339 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-13 00:42:56.802356 | orchestrator | Monday 13 April 2026 00:42:54 +0000 (0:00:00.151) 0:00:42.717 **********
2026-04-13 00:42:56.802371 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:56.802386 | orchestrator |
2026-04-13 00:42:56.802401 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-13 00:42:56.802415 | orchestrator | Monday 13 April 2026 00:42:54 +0000 (0:00:00.176) 0:00:42.894 **********
2026-04-13 00:42:56.802431 | orchestrator | ok: [testbed-node-5] => {
2026-04-13 00:42:56.802447 | orchestrator |  "ceph_osd_devices": {
2026-04-13 00:42:56.802464 | orchestrator |  "sdb": {
2026-04-13 00:42:56.802501 | orchestrator |  "osd_lvm_uuid": "ae95053f-cfae-50f3-8301-23c2132e6da4"
2026-04-13 00:42:56.802512 | orchestrator |  },
2026-04-13 00:42:56.802521 | orchestrator |  "sdc": {
2026-04-13 00:42:56.802530 | orchestrator |  "osd_lvm_uuid": "42f39a41-1a89-55d6-ba76-16e64e7a2b2d"
2026-04-13 00:42:56.802538 | orchestrator |  }
2026-04-13 00:42:56.802547 | orchestrator |  }
2026-04-13 00:42:56.802556 | orchestrator | }
2026-04-13 00:42:56.802651 | orchestrator |
2026-04-13 00:42:56.802676 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-13 00:42:56.802685 | orchestrator | Monday 13 April 2026 00:42:54 +0000 (0:00:00.167) 0:00:43.062 **********
2026-04-13 00:42:56.802695 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:56.802710 | orchestrator |
2026-04-13 00:42:56.802725 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-13 00:42:56.802739 | orchestrator | Monday 13 April 2026 00:42:54 +0000 (0:00:00.175) 0:00:43.238 **********
2026-04-13 00:42:56.802753 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:56.802767 | orchestrator |
2026-04-13 00:42:56.802780 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-13 00:42:56.802794 | orchestrator | Monday 13 April 2026 00:42:55 +0000 (0:00:00.398) 0:00:43.636 **********
2026-04-13 00:42:56.802807 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:56.802819 | orchestrator |
2026-04-13 00:42:56.802832 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-13 00:42:56.802847 | orchestrator | Monday 13 April 2026 00:42:55 +0000 (0:00:00.173) 0:00:43.809 **********
2026-04-13 00:42:56.802860 | orchestrator | changed: [testbed-node-5] => {
2026-04-13 00:42:56.802873 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-04-13 00:42:56.802887 | orchestrator |  "ceph_osd_devices": {
2026-04-13 00:42:56.802899 | orchestrator |  "sdb": {
2026-04-13 00:42:56.802913 | orchestrator |  "osd_lvm_uuid": "ae95053f-cfae-50f3-8301-23c2132e6da4"
2026-04-13 00:42:56.802928 | orchestrator |  },
2026-04-13 00:42:56.802943 | orchestrator |  "sdc": {
2026-04-13 00:42:56.802958 | orchestrator |  "osd_lvm_uuid": "42f39a41-1a89-55d6-ba76-16e64e7a2b2d"
2026-04-13 00:42:56.802973 | orchestrator |  }
2026-04-13 00:42:56.802988 | orchestrator |  },
2026-04-13 00:42:56.803005 | orchestrator |  "lvm_volumes": [
2026-04-13 00:42:56.803014 | orchestrator |  {
2026-04-13 00:42:56.803023 | orchestrator |  "data": "osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4",
2026-04-13 00:42:56.803033 | orchestrator |  "data_vg": "ceph-ae95053f-cfae-50f3-8301-23c2132e6da4"
2026-04-13 00:42:56.803041 | orchestrator |  },
2026-04-13 00:42:56.803055 | orchestrator |  {
2026-04-13 00:42:56.803064 | orchestrator |  "data": "osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d",
2026-04-13 00:42:56.803073 | orchestrator |  "data_vg": "ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d"
2026-04-13 00:42:56.803081 | orchestrator |  }
2026-04-13 00:42:56.803090 | orchestrator |  ]
2026-04-13 00:42:56.803099 | orchestrator |  }
2026-04-13 00:42:56.803108 | orchestrator | }
2026-04-13 00:42:56.803117 | orchestrator |
2026-04-13 00:42:56.803126 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-13 00:42:56.803135 | orchestrator | Monday 13 April 2026 00:42:55 +0000 (0:00:00.214) 0:00:44.024 **********
2026-04-13 00:42:56.803144 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-13 00:42:56.803153 | orchestrator |
2026-04-13 00:42:56.803162 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:42:56.803171 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-13 00:42:56.803180 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-13 00:42:56.803188 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-13 00:42:56.803196 | orchestrator |
2026-04-13 00:42:56.803205 | orchestrator |
2026-04-13 00:42:56.803213 | orchestrator |
2026-04-13 00:42:56.803221 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:42:56.803229 | orchestrator | Monday 13 April 2026 00:42:56 +0000 (0:00:01.070) 0:00:45.095 **********
2026-04-13 00:42:56.803246 | orchestrator | ===============================================================================
2026-04-13 00:42:56.803254 | orchestrator | Write configuration file ------------------------------------------------ 4.56s
2026-04-13 00:42:56.803262 | orchestrator | Get initial list of available block devices ----------------------------- 1.25s
2026-04-13 00:42:56.803279 | orchestrator | Add known links to the list of available block devices ------------------ 1.19s
2026-04-13 00:42:56.803287 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s
2026-04-13 00:42:56.803296 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.07s
2026-04-13 00:42:56.803304 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s
2026-04-13 00:42:56.803312 | orchestrator | Add known links to the list of available block devices ------------------ 0.94s
2026-04-13 00:42:56.803320 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s
2026-04-13 00:42:56.803328 | orchestrator | Set WAL devices config data --------------------------------------------- 0.82s
2026-04-13 00:42:56.803336 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s
2026-04-13 00:42:56.803344 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.80s
2026-04-13 00:42:56.803352 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s
2026-04-13 00:42:56.803360 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s
2026-04-13 00:42:56.803379 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.72s
2026-04-13 00:42:57.168390 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2026-04-13 00:42:57.168479 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.70s
2026-04-13 00:42:57.168489 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2026-04-13 00:42:57.168497 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2026-04-13 00:42:57.168505 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2026-04-13 00:42:57.168513 | orchestrator | Print DB devices -------------------------------------------------------- 0.67s
2026-04-13 00:43:19.060522 | orchestrator | 2026-04-13 00:43:19 | INFO  | Task 62233faf-37b8-4c00-b093-9e4b05bd499e (sync inventory) is running in background. Output coming soon.
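Editor's note: the "Generate lvm_volumes structure (block only)" and "Compile lvm_volumes" tasks in the play above turn each `ceph_osd_devices` entry into one `lvm_volumes` item by prefixing the per-device `osd_lvm_uuid` with `osd-block-` and `ceph-`, as the printed `_ceph_configure_lvm_config_data` shows. A minimal Python sketch of that mapping, reconstructed from the log output only (the function name `compile_lvm_volumes` is hypothetical, not the playbook's actual implementation):

```python
# Illustrative reconstruction of the mapping visible in the log:
# each device's osd_lvm_uuid U becomes
#   {"data": "osd-block-U", "data_vg": "ceph-U"}
def compile_lvm_volumes(ceph_osd_devices):
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in ceph_osd_devices.values()
    ]

# Values taken from the testbed-node-5 output above.
devices = {
    "sdb": {"osd_lvm_uuid": "ae95053f-cfae-50f3-8301-23c2132e6da4"},
    "sdc": {"osd_lvm_uuid": "42f39a41-1a89-55d6-ba76-16e64e7a2b2d"},
}
print(compile_lvm_volumes(devices))
```

The same UUID naming both the LV (`data`) and the VG (`data_vg`) is what ties each OSD's logical volume to its volume group in the generated configuration.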
2026-04-13 00:43:51.036444 | orchestrator | 2026-04-13 00:43:20 | INFO  | Starting group_vars file reorganization
2026-04-13 00:43:51.036638 | orchestrator | 2026-04-13 00:43:20 | INFO  | Moved 0 file(s) to their respective directories
2026-04-13 00:43:51.036658 | orchestrator | 2026-04-13 00:43:20 | INFO  | Group_vars file reorganization completed
2026-04-13 00:43:51.036669 | orchestrator | 2026-04-13 00:43:23 | INFO  | Starting variable preparation from inventory
2026-04-13 00:43:51.036679 | orchestrator | 2026-04-13 00:43:26 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-13 00:43:51.036689 | orchestrator | 2026-04-13 00:43:26 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-13 00:43:51.036716 | orchestrator | 2026-04-13 00:43:26 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-13 00:43:51.036726 | orchestrator | 2026-04-13 00:43:26 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-13 00:43:51.036735 | orchestrator | 2026-04-13 00:43:26 | INFO  | Variable preparation completed
2026-04-13 00:43:51.036744 | orchestrator | 2026-04-13 00:43:28 | INFO  | Starting inventory overwrite handling
2026-04-13 00:43:51.036754 | orchestrator | 2026-04-13 00:43:28 | INFO  | Handling group overwrites in 99-overwrite
2026-04-13 00:43:51.036763 | orchestrator | 2026-04-13 00:43:28 | INFO  | Removing group frr:children from 60-generic
2026-04-13 00:43:51.036795 | orchestrator | 2026-04-13 00:43:28 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-13 00:43:51.036805 | orchestrator | 2026-04-13 00:43:28 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-13 00:43:51.036814 | orchestrator | 2026-04-13 00:43:28 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-13 00:43:51.036823 | orchestrator | 2026-04-13 00:43:28 | INFO  | Handling group overwrites in 20-roles
2026-04-13 00:43:51.036832 | orchestrator | 2026-04-13 00:43:28 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-13 00:43:51.036842 | orchestrator | 2026-04-13 00:43:28 | INFO  | Removed 5 group(s) in total
2026-04-13 00:43:51.036851 | orchestrator | 2026-04-13 00:43:28 | INFO  | Inventory overwrite handling completed
2026-04-13 00:43:51.036860 | orchestrator | 2026-04-13 00:43:29 | INFO  | Starting merge of inventory files
2026-04-13 00:43:51.036869 | orchestrator | 2026-04-13 00:43:29 | INFO  | Inventory files merged successfully
2026-04-13 00:43:51.036878 | orchestrator | 2026-04-13 00:43:34 | INFO  | Generating minified hosts file
2026-04-13 00:43:51.036888 | orchestrator | 2026-04-13 00:43:35 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-13 00:43:51.036898 | orchestrator | 2026-04-13 00:43:35 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-13 00:43:51.036908 | orchestrator | 2026-04-13 00:43:37 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-13 00:43:51.036917 | orchestrator | 2026-04-13 00:43:49 | INFO  | Successfully wrote ClusterShell configuration
2026-04-13 00:43:51.036926 | orchestrator | [master aa59515] 2026-04-13-00-43
2026-04-13 00:43:51.036937 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-13 00:43:51.036947 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-13 00:43:51.036956 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-13 00:43:51.036965 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-13 00:43:52.512252 | orchestrator | 2026-04-13 00:43:52 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-13 00:43:52.601123 | orchestrator | 2026-04-13 00:43:52 | INFO  | Task a0b43d99-6fa3-41a1-8669-7d93a9ac2bef (ceph-create-lvm-devices) was prepared for execution.
2026-04-13 00:43:52.601226 | orchestrator | 2026-04-13 00:43:52 | INFO  | It takes a moment until task a0b43d99-6fa3-41a1-8669-7d93a9ac2bef (ceph-create-lvm-devices) has been started and output is visible here. 2026-04-13 00:44:05.675168 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-13 00:44:05.675280 | orchestrator | 2.16.14 2026-04-13 00:44:05.675297 | orchestrator | 2026-04-13 00:44:05.675311 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-13 00:44:05.675333 | orchestrator | 2026-04-13 00:44:05.675394 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-13 00:44:05.675411 | orchestrator | Monday 13 April 2026 00:43:57 +0000 (0:00:00.295) 0:00:00.295 ********** 2026-04-13 00:44:05.675427 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-13 00:44:05.675443 | orchestrator | 2026-04-13 00:44:05.675459 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-13 00:44:05.675476 | orchestrator | Monday 13 April 2026 00:43:57 +0000 (0:00:00.267) 0:00:00.562 ********** 2026-04-13 00:44:05.675493 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:44:05.675510 | orchestrator | 2026-04-13 00:44:05.675585 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:05.675603 | orchestrator | Monday 13 April 2026 00:43:57 +0000 (0:00:00.253) 0:00:00.815 ********** 2026-04-13 00:44:05.675649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-13 00:44:05.675662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-13 00:44:05.675672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-13 00:44:05.675682 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-13 00:44:05.675692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-13 00:44:05.675703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-13 00:44:05.675715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-13 00:44:05.675726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-13 00:44:05.675738 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-13 00:44:05.675749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-13 00:44:05.675760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-13 00:44:05.675772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-13 00:44:05.675783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-13 00:44:05.675794 | orchestrator | 2026-04-13 00:44:05.675806 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:05.675817 | orchestrator | Monday 13 April 2026 00:43:58 +0000 (0:00:00.435) 0:00:01.251 ********** 2026-04-13 00:44:05.675829 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:05.675840 | orchestrator | 2026-04-13 00:44:05.675854 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:05.675871 | orchestrator | Monday 13 April 2026 00:43:58 +0000 (0:00:00.624) 0:00:01.876 ********** 2026-04-13 00:44:05.675895 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:05.675914 | orchestrator | 2026-04-13 00:44:05.675930 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:05.675946 | orchestrator | Monday 13 April 2026 00:43:58 +0000 (0:00:00.255) 0:00:02.131 ********** 2026-04-13 00:44:05.675983 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:05.675997 | orchestrator | 2026-04-13 00:44:05.676009 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:05.676020 | orchestrator | Monday 13 April 2026 00:43:59 +0000 (0:00:00.237) 0:00:02.369 ********** 2026-04-13 00:44:05.676030 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:05.676040 | orchestrator | 2026-04-13 00:44:05.676050 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:05.676060 | orchestrator | Monday 13 April 2026 00:43:59 +0000 (0:00:00.259) 0:00:02.628 ********** 2026-04-13 00:44:05.676070 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:05.676080 | orchestrator | 2026-04-13 00:44:05.676091 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:05.676100 | orchestrator | Monday 13 April 2026 00:43:59 +0000 (0:00:00.251) 0:00:02.880 ********** 2026-04-13 00:44:05.676110 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:05.676120 | orchestrator | 2026-04-13 00:44:05.676130 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:05.676141 | orchestrator | Monday 13 April 2026 00:43:59 +0000 (0:00:00.213) 0:00:03.094 ********** 2026-04-13 00:44:05.676151 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:05.676161 | orchestrator | 2026-04-13 00:44:05.676171 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:05.676181 | orchestrator | Monday 13 April 2026 00:44:00 +0000 (0:00:00.208) 0:00:03.302 ********** 
2026-04-13 00:44:05.676191 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:05.676212 | orchestrator | 2026-04-13 00:44:05.676222 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:05.676232 | orchestrator | Monday 13 April 2026 00:44:00 +0000 (0:00:00.181) 0:00:03.484 ********** 2026-04-13 00:44:05.676242 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3) 2026-04-13 00:44:05.676253 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3) 2026-04-13 00:44:05.676263 | orchestrator | 2026-04-13 00:44:05.676273 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:05.676303 | orchestrator | Monday 13 April 2026 00:44:00 +0000 (0:00:00.442) 0:00:03.927 ********** 2026-04-13 00:44:05.676314 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0679126a-4000-4d61-a7db-c334b9d13f77) 2026-04-13 00:44:05.676324 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0679126a-4000-4d61-a7db-c334b9d13f77) 2026-04-13 00:44:05.676333 | orchestrator | 2026-04-13 00:44:05.676343 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:05.676353 | orchestrator | Monday 13 April 2026 00:44:01 +0000 (0:00:00.449) 0:00:04.377 ********** 2026-04-13 00:44:05.676363 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9561ecc7-53f2-4f93-a506-8a94937d6a2f) 2026-04-13 00:44:05.676373 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9561ecc7-53f2-4f93-a506-8a94937d6a2f) 2026-04-13 00:44:05.676383 | orchestrator | 2026-04-13 00:44:05.676393 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:05.676403 | orchestrator | Monday 13 April 2026 00:44:01 +0000 
(0:00:00.686) 0:00:05.063 ********** 2026-04-13 00:44:05.676413 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_36e0079f-b8cc-463e-a3d4-692b22821d05) 2026-04-13 00:44:05.676423 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_36e0079f-b8cc-463e-a3d4-692b22821d05) 2026-04-13 00:44:05.676433 | orchestrator | 2026-04-13 00:44:05.676443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:05.676453 | orchestrator | Monday 13 April 2026 00:44:02 +0000 (0:00:00.837) 0:00:05.901 ********** 2026-04-13 00:44:05.676463 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-13 00:44:05.676473 | orchestrator | 2026-04-13 00:44:05.676483 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:05.676499 | orchestrator | Monday 13 April 2026 00:44:03 +0000 (0:00:00.913) 0:00:06.814 ********** 2026-04-13 00:44:05.676509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-13 00:44:05.676550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-13 00:44:05.676568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-13 00:44:05.676586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-13 00:44:05.676602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-13 00:44:05.676620 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-13 00:44:05.676631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-13 00:44:05.676641 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-3 => (item=loop7) 2026-04-13 00:44:05.676651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-13 00:44:05.676661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-13 00:44:05.676670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-13 00:44:05.676680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-13 00:44:05.676698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-13 00:44:05.676708 | orchestrator | 2026-04-13 00:44:05.676718 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:05.676728 | orchestrator | Monday 13 April 2026 00:44:04 +0000 (0:00:00.515) 0:00:07.330 ********** 2026-04-13 00:44:05.676738 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:05.676747 | orchestrator | 2026-04-13 00:44:05.676757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:05.676767 | orchestrator | Monday 13 April 2026 00:44:04 +0000 (0:00:00.217) 0:00:07.547 ********** 2026-04-13 00:44:05.676777 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:05.676787 | orchestrator | 2026-04-13 00:44:05.676797 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:05.676807 | orchestrator | Monday 13 April 2026 00:44:04 +0000 (0:00:00.225) 0:00:07.773 ********** 2026-04-13 00:44:05.676817 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:05.676827 | orchestrator | 2026-04-13 00:44:05.676837 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:05.676847 | orchestrator | Monday 13 April 2026 00:44:04 +0000 
(0:00:00.202) 0:00:07.975 ********** 2026-04-13 00:44:05.676857 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:05.676867 | orchestrator | 2026-04-13 00:44:05.676877 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:05.676887 | orchestrator | Monday 13 April 2026 00:44:05 +0000 (0:00:00.234) 0:00:08.209 ********** 2026-04-13 00:44:05.676897 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:05.676907 | orchestrator | 2026-04-13 00:44:05.676917 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:05.676927 | orchestrator | Monday 13 April 2026 00:44:05 +0000 (0:00:00.196) 0:00:08.406 ********** 2026-04-13 00:44:05.676937 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:05.676947 | orchestrator | 2026-04-13 00:44:05.677029 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:05.677040 | orchestrator | Monday 13 April 2026 00:44:05 +0000 (0:00:00.200) 0:00:08.606 ********** 2026-04-13 00:44:05.677051 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:05.677061 | orchestrator | 2026-04-13 00:44:05.677079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:14.418949 | orchestrator | Monday 13 April 2026 00:44:05 +0000 (0:00:00.202) 0:00:08.809 ********** 2026-04-13 00:44:14.419075 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.419102 | orchestrator | 2026-04-13 00:44:14.419122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:14.419140 | orchestrator | Monday 13 April 2026 00:44:05 +0000 (0:00:00.204) 0:00:09.014 ********** 2026-04-13 00:44:14.419156 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-13 00:44:14.419175 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-13 
00:44:14.419194 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-13 00:44:14.419214 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-13 00:44:14.419232 | orchestrator | 2026-04-13 00:44:14.419252 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:14.419271 | orchestrator | Monday 13 April 2026 00:44:07 +0000 (0:00:01.178) 0:00:10.193 ********** 2026-04-13 00:44:14.419290 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.419309 | orchestrator | 2026-04-13 00:44:14.419328 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:14.419347 | orchestrator | Monday 13 April 2026 00:44:07 +0000 (0:00:00.198) 0:00:10.391 ********** 2026-04-13 00:44:14.419365 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.419382 | orchestrator | 2026-04-13 00:44:14.419400 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:14.419448 | orchestrator | Monday 13 April 2026 00:44:07 +0000 (0:00:00.258) 0:00:10.649 ********** 2026-04-13 00:44:14.419467 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.419484 | orchestrator | 2026-04-13 00:44:14.419504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:14.419557 | orchestrator | Monday 13 April 2026 00:44:07 +0000 (0:00:00.240) 0:00:10.890 ********** 2026-04-13 00:44:14.419577 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.419598 | orchestrator | 2026-04-13 00:44:14.419617 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-13 00:44:14.419636 | orchestrator | Monday 13 April 2026 00:44:07 +0000 (0:00:00.199) 0:00:11.089 ********** 2026-04-13 00:44:14.419648 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.419660 | orchestrator | 2026-04-13 
00:44:14.419673 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-13 00:44:14.419686 | orchestrator | Monday 13 April 2026 00:44:08 +0000 (0:00:00.175) 0:00:11.265 ********** 2026-04-13 00:44:14.419699 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '273f60d0-eab1-5837-bb33-0c04c9e5b829'}}) 2026-04-13 00:44:14.419712 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f99b2314-ad51-5797-a71e-17207c9800e6'}}) 2026-04-13 00:44:14.419724 | orchestrator | 2026-04-13 00:44:14.419737 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-13 00:44:14.419750 | orchestrator | Monday 13 April 2026 00:44:08 +0000 (0:00:00.223) 0:00:11.488 ********** 2026-04-13 00:44:14.419763 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'}) 2026-04-13 00:44:14.419777 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'}) 2026-04-13 00:44:14.419790 | orchestrator | 2026-04-13 00:44:14.419804 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-13 00:44:14.419815 | orchestrator | Monday 13 April 2026 00:44:10 +0000 (0:00:02.154) 0:00:13.643 ********** 2026-04-13 00:44:14.419826 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})  2026-04-13 00:44:14.419858 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})  2026-04-13 00:44:14.419869 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.419881 
| orchestrator | 2026-04-13 00:44:14.419892 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-13 00:44:14.419903 | orchestrator | Monday 13 April 2026 00:44:10 +0000 (0:00:00.186) 0:00:13.830 ********** 2026-04-13 00:44:14.419914 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'}) 2026-04-13 00:44:14.419925 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'}) 2026-04-13 00:44:14.419937 | orchestrator | 2026-04-13 00:44:14.419948 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-13 00:44:14.419959 | orchestrator | Monday 13 April 2026 00:44:12 +0000 (0:00:01.592) 0:00:15.422 ********** 2026-04-13 00:44:14.419970 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})  2026-04-13 00:44:14.419981 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})  2026-04-13 00:44:14.419992 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.420003 | orchestrator | 2026-04-13 00:44:14.420015 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-13 00:44:14.420037 | orchestrator | Monday 13 April 2026 00:44:12 +0000 (0:00:00.181) 0:00:15.604 ********** 2026-04-13 00:44:14.420070 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.420082 | orchestrator | 2026-04-13 00:44:14.420100 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-13 00:44:14.420125 | orchestrator | Monday 13 April 2026 00:44:12 
+0000 (0:00:00.149) 0:00:15.753 ********** 2026-04-13 00:44:14.420150 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})  2026-04-13 00:44:14.420204 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})  2026-04-13 00:44:14.420222 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.420239 | orchestrator | 2026-04-13 00:44:14.420258 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-13 00:44:14.420278 | orchestrator | Monday 13 April 2026 00:44:13 +0000 (0:00:00.389) 0:00:16.143 ********** 2026-04-13 00:44:14.420295 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.420313 | orchestrator | 2026-04-13 00:44:14.420332 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-13 00:44:14.420351 | orchestrator | Monday 13 April 2026 00:44:13 +0000 (0:00:00.136) 0:00:16.280 ********** 2026-04-13 00:44:14.420369 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})  2026-04-13 00:44:14.420388 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})  2026-04-13 00:44:14.420405 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.420416 | orchestrator | 2026-04-13 00:44:14.420435 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-13 00:44:14.420447 | orchestrator | Monday 13 April 2026 00:44:13 +0000 (0:00:00.164) 0:00:16.444 ********** 2026-04-13 00:44:14.420458 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.420469 
| orchestrator | 2026-04-13 00:44:14.420480 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-13 00:44:14.420491 | orchestrator | Monday 13 April 2026 00:44:13 +0000 (0:00:00.134) 0:00:16.578 ********** 2026-04-13 00:44:14.420502 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})  2026-04-13 00:44:14.420537 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})  2026-04-13 00:44:14.420548 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.420559 | orchestrator | 2026-04-13 00:44:14.420570 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-13 00:44:14.420582 | orchestrator | Monday 13 April 2026 00:44:13 +0000 (0:00:00.187) 0:00:16.766 ********** 2026-04-13 00:44:14.420593 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:44:14.420605 | orchestrator | 2026-04-13 00:44:14.420616 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-13 00:44:14.420627 | orchestrator | Monday 13 April 2026 00:44:13 +0000 (0:00:00.162) 0:00:16.928 ********** 2026-04-13 00:44:14.420639 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})  2026-04-13 00:44:14.420650 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})  2026-04-13 00:44:14.420661 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.420672 | orchestrator | 2026-04-13 00:44:14.420684 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 
2026-04-13 00:44:14.420705 | orchestrator | Monday 13 April 2026 00:44:13 +0000 (0:00:00.159) 0:00:17.087 ********** 2026-04-13 00:44:14.420716 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})  2026-04-13 00:44:14.420728 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})  2026-04-13 00:44:14.420739 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.420750 | orchestrator | 2026-04-13 00:44:14.420761 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-13 00:44:14.420773 | orchestrator | Monday 13 April 2026 00:44:14 +0000 (0:00:00.172) 0:00:17.259 ********** 2026-04-13 00:44:14.420784 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})  2026-04-13 00:44:14.420795 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})  2026-04-13 00:44:14.420806 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.420818 | orchestrator | 2026-04-13 00:44:14.420829 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-13 00:44:14.420840 | orchestrator | Monday 13 April 2026 00:44:14 +0000 (0:00:00.150) 0:00:17.410 ********** 2026-04-13 00:44:14.420852 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:14.420863 | orchestrator | 2026-04-13 00:44:14.420874 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-13 00:44:14.420896 | orchestrator | Monday 13 April 2026 00:44:14 +0000 (0:00:00.148) 0:00:17.559 ********** 2026-04-13 
00:44:21.123976 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:21.124090 | orchestrator | 2026-04-13 00:44:21.124107 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-13 00:44:21.124121 | orchestrator | Monday 13 April 2026 00:44:14 +0000 (0:00:00.147) 0:00:17.706 ********** 2026-04-13 00:44:21.124134 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:21.124146 | orchestrator | 2026-04-13 00:44:21.124158 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-13 00:44:21.124213 | orchestrator | Monday 13 April 2026 00:44:14 +0000 (0:00:00.128) 0:00:17.835 ********** 2026-04-13 00:44:21.124227 | orchestrator | ok: [testbed-node-3] => { 2026-04-13 00:44:21.124240 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-13 00:44:21.124251 | orchestrator | } 2026-04-13 00:44:21.124263 | orchestrator | 2026-04-13 00:44:21.124274 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-13 00:44:21.124286 | orchestrator | Monday 13 April 2026 00:44:15 +0000 (0:00:00.386) 0:00:18.222 ********** 2026-04-13 00:44:21.124297 | orchestrator | ok: [testbed-node-3] => { 2026-04-13 00:44:21.124308 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-13 00:44:21.124319 | orchestrator | } 2026-04-13 00:44:21.124330 | orchestrator | 2026-04-13 00:44:21.124341 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-13 00:44:21.124353 | orchestrator | Monday 13 April 2026 00:44:15 +0000 (0:00:00.164) 0:00:18.387 ********** 2026-04-13 00:44:21.124363 | orchestrator | ok: [testbed-node-3] => { 2026-04-13 00:44:21.124375 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-13 00:44:21.124386 | orchestrator | } 2026-04-13 00:44:21.124397 | orchestrator | 2026-04-13 00:44:21.124408 | orchestrator | TASK [Gather DB VGs with total and available size 
in bytes] ******************** 2026-04-13 00:44:21.124419 | orchestrator | Monday 13 April 2026 00:44:15 +0000 (0:00:00.172) 0:00:18.559 ********** 2026-04-13 00:44:21.124430 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:44:21.124441 | orchestrator | 2026-04-13 00:44:21.124453 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-13 00:44:21.124494 | orchestrator | Monday 13 April 2026 00:44:16 +0000 (0:00:00.710) 0:00:19.270 ********** 2026-04-13 00:44:21.124581 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:44:21.124595 | orchestrator | 2026-04-13 00:44:21.124608 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-13 00:44:21.124621 | orchestrator | Monday 13 April 2026 00:44:16 +0000 (0:00:00.533) 0:00:19.804 ********** 2026-04-13 00:44:21.124634 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:44:21.124646 | orchestrator | 2026-04-13 00:44:21.124659 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-13 00:44:21.124672 | orchestrator | Monday 13 April 2026 00:44:17 +0000 (0:00:00.575) 0:00:20.380 ********** 2026-04-13 00:44:21.124721 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:44:21.124736 | orchestrator | 2026-04-13 00:44:21.124749 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-13 00:44:21.124762 | orchestrator | Monday 13 April 2026 00:44:17 +0000 (0:00:00.160) 0:00:20.541 ********** 2026-04-13 00:44:21.124774 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:21.124814 | orchestrator | 2026-04-13 00:44:21.124826 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-13 00:44:21.124837 | orchestrator | Monday 13 April 2026 00:44:17 +0000 (0:00:00.115) 0:00:20.656 ********** 2026-04-13 00:44:21.124848 | orchestrator | skipping: [testbed-node-3] 2026-04-13 
00:44:21.124860 | orchestrator |
2026-04-13 00:44:21.124871 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-13 00:44:21.124882 | orchestrator | Monday 13 April 2026 00:44:17 +0000 (0:00:00.131) 0:00:20.788 **********
2026-04-13 00:44:21.124893 | orchestrator | ok: [testbed-node-3] => {
2026-04-13 00:44:21.124904 | orchestrator |     "vgs_report": {
2026-04-13 00:44:21.124943 | orchestrator |         "vg": []
2026-04-13 00:44:21.124955 | orchestrator |     }
2026-04-13 00:44:21.124966 | orchestrator | }
2026-04-13 00:44:21.124977 | orchestrator |
2026-04-13 00:44:21.124989 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-13 00:44:21.125000 | orchestrator | Monday 13 April 2026 00:44:17 +0000 (0:00:00.155) 0:00:20.943 **********
2026-04-13 00:44:21.125011 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125022 | orchestrator |
2026-04-13 00:44:21.125034 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-13 00:44:21.125045 | orchestrator | Monday 13 April 2026 00:44:17 +0000 (0:00:00.129) 0:00:21.072 **********
2026-04-13 00:44:21.125056 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125074 | orchestrator |
2026-04-13 00:44:21.125094 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-13 00:44:21.125114 | orchestrator | Monday 13 April 2026 00:44:18 +0000 (0:00:00.357) 0:00:21.212 **********
2026-04-13 00:44:21.125134 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125154 | orchestrator |
2026-04-13 00:44:21.125175 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-13 00:44:21.125189 | orchestrator | Monday 13 April 2026 00:44:18 +0000 (0:00:00.142) 0:00:21.570 **********
2026-04-13 00:44:21.125201 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125211 | orchestrator |
2026-04-13 00:44:21.125223 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-13 00:44:21.125234 | orchestrator | Monday 13 April 2026 00:44:18 +0000 (0:00:00.139) 0:00:21.712 **********
2026-04-13 00:44:21.125245 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125256 | orchestrator |
2026-04-13 00:44:21.125267 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-13 00:44:21.125278 | orchestrator | Monday 13 April 2026 00:44:18 +0000 (0:00:00.139) 0:00:21.852 **********
2026-04-13 00:44:21.125289 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125300 | orchestrator |
2026-04-13 00:44:21.125311 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-13 00:44:21.125322 | orchestrator | Monday 13 April 2026 00:44:18 +0000 (0:00:00.134) 0:00:21.987 **********
2026-04-13 00:44:21.125333 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125355 | orchestrator |
2026-04-13 00:44:21.125366 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-13 00:44:21.125378 | orchestrator | Monday 13 April 2026 00:44:19 +0000 (0:00:00.189) 0:00:22.177 **********
2026-04-13 00:44:21.125410 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125422 | orchestrator |
2026-04-13 00:44:21.125453 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-13 00:44:21.125465 | orchestrator | Monday 13 April 2026 00:44:19 +0000 (0:00:00.135) 0:00:22.312 **********
2026-04-13 00:44:21.125476 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125487 | orchestrator |
2026-04-13 00:44:21.125498 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-13 00:44:21.125534 | orchestrator | Monday 13 April 2026 00:44:19 +0000 (0:00:00.157) 0:00:22.470 **********
2026-04-13 00:44:21.125546 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125557 | orchestrator |
2026-04-13 00:44:21.125569 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-13 00:44:21.125580 | orchestrator | Monday 13 April 2026 00:44:19 +0000 (0:00:00.152) 0:00:22.622 **********
2026-04-13 00:44:21.125591 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125602 | orchestrator |
2026-04-13 00:44:21.125613 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-13 00:44:21.125624 | orchestrator | Monday 13 April 2026 00:44:19 +0000 (0:00:00.139) 0:00:22.762 **********
2026-04-13 00:44:21.125635 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125646 | orchestrator |
2026-04-13 00:44:21.125657 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-13 00:44:21.125668 | orchestrator | Monday 13 April 2026 00:44:19 +0000 (0:00:00.137) 0:00:22.899 **********
2026-04-13 00:44:21.125679 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125690 | orchestrator |
2026-04-13 00:44:21.125701 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-13 00:44:21.125712 | orchestrator | Monday 13 April 2026 00:44:19 +0000 (0:00:00.139) 0:00:23.039 **********
2026-04-13 00:44:21.125723 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125734 | orchestrator |
2026-04-13 00:44:21.125750 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-13 00:44:21.125762 | orchestrator | Monday 13 April 2026 00:44:20 +0000 (0:00:00.122) 0:00:23.162 **********
2026-04-13 00:44:21.125774 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})
2026-04-13 00:44:21.125787 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})
2026-04-13 00:44:21.125798 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125809 | orchestrator |
2026-04-13 00:44:21.125820 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-13 00:44:21.125831 | orchestrator | Monday 13 April 2026 00:44:20 +0000 (0:00:00.175) 0:00:23.338 **********
2026-04-13 00:44:21.125842 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})
2026-04-13 00:44:21.125854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})
2026-04-13 00:44:21.125865 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125876 | orchestrator |
2026-04-13 00:44:21.125887 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-13 00:44:21.125898 | orchestrator | Monday 13 April 2026 00:44:20 +0000 (0:00:00.388) 0:00:23.726 **********
2026-04-13 00:44:21.125909 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})
2026-04-13 00:44:21.125920 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})
2026-04-13 00:44:21.125939 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.125950 | orchestrator |
2026-04-13 00:44:21.125961 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-13 00:44:21.126012 | orchestrator | Monday 13 April 2026 00:44:20 +0000 (0:00:00.156) 0:00:23.882 **********
2026-04-13 00:44:21.126098 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})
2026-04-13 00:44:21.126110 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})
2026-04-13 00:44:21.126121 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.126133 | orchestrator |
2026-04-13 00:44:21.126144 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-13 00:44:21.126155 | orchestrator | Monday 13 April 2026 00:44:20 +0000 (0:00:00.144) 0:00:24.027 **********
2026-04-13 00:44:21.126166 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})
2026-04-13 00:44:21.126178 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})
2026-04-13 00:44:21.126189 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:21.126200 | orchestrator |
2026-04-13 00:44:21.126211 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-13 00:44:21.126222 | orchestrator | Monday 13 April 2026 00:44:21 +0000 (0:00:00.174) 0:00:24.202 **********
2026-04-13 00:44:21.126243 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})
2026-04-13 00:44:26.650824 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})
2026-04-13 00:44:26.650936 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:26.650955 | orchestrator |
2026-04-13 00:44:26.650969 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-13 00:44:26.650982 | orchestrator | Monday 13 April 2026 00:44:21 +0000 (0:00:00.157) 0:00:24.360 **********
2026-04-13 00:44:26.650993 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})
2026-04-13 00:44:26.651005 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})
2026-04-13 00:44:26.651017 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:26.651028 | orchestrator |
2026-04-13 00:44:26.651040 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-13 00:44:26.651051 | orchestrator | Monday 13 April 2026 00:44:21 +0000 (0:00:00.167) 0:00:24.528 **********
2026-04-13 00:44:26.651063 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})
2026-04-13 00:44:26.651092 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})
2026-04-13 00:44:26.651111 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:26.651123 | orchestrator |
2026-04-13 00:44:26.651135 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-13 00:44:26.651147 | orchestrator | Monday 13 April 2026 00:44:21 +0000 (0:00:00.143) 0:00:24.671 **********
2026-04-13 00:44:26.651158 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:44:26.651171 | orchestrator |
2026-04-13 00:44:26.651206 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-13 00:44:26.651218 | orchestrator | Monday 13 April 2026 00:44:22 +0000 (0:00:00.629) 0:00:25.301 **********
2026-04-13 00:44:26.651230 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:44:26.651241 | orchestrator |
2026-04-13 00:44:26.651253 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-13 00:44:26.651264 | orchestrator | Monday 13 April 2026 00:44:22 +0000 (0:00:00.523) 0:00:25.825 **********
2026-04-13 00:44:26.651275 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:44:26.651287 | orchestrator |
2026-04-13 00:44:26.651306 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-13 00:44:26.651325 | orchestrator | Monday 13 April 2026 00:44:22 +0000 (0:00:00.143) 0:00:25.968 **********
2026-04-13 00:44:26.651344 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'vg_name': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})
2026-04-13 00:44:26.651366 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'vg_name': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})
2026-04-13 00:44:26.651386 | orchestrator |
2026-04-13 00:44:26.651407 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-13 00:44:26.651427 | orchestrator | Monday 13 April 2026 00:44:23 +0000 (0:00:00.178) 0:00:26.147 **********
2026-04-13 00:44:26.651448 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})
2026-04-13 00:44:26.651466 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})
2026-04-13 00:44:26.651480 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:26.651492 | orchestrator |
2026-04-13 00:44:26.651542 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-13 00:44:26.651557 | orchestrator | Monday 13 April 2026 00:44:23 +0000 (0:00:00.145) 0:00:26.293 **********
2026-04-13 00:44:26.651570 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})
2026-04-13 00:44:26.651584 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})
2026-04-13 00:44:26.651596 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:26.651609 | orchestrator |
2026-04-13 00:44:26.651622 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-13 00:44:26.651634 | orchestrator | Monday 13 April 2026 00:44:23 +0000 (0:00:00.379) 0:00:26.673 **********
2026-04-13 00:44:26.651647 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})
2026-04-13 00:44:26.651660 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})
2026-04-13 00:44:26.651672 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:26.651685 | orchestrator |
2026-04-13 00:44:26.651697 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-13 00:44:26.651710 | orchestrator | Monday 13 April 2026 00:44:23 +0000 (0:00:00.172) 0:00:26.845 **********
2026-04-13 00:44:26.651741 | orchestrator | ok: [testbed-node-3] => {
2026-04-13 00:44:26.651753 | orchestrator |     "lvm_report": {
2026-04-13 00:44:26.651765 | orchestrator |         "lv": [
2026-04-13 00:44:26.651776 | orchestrator |             {
2026-04-13 00:44:26.651787 | orchestrator |                 "lv_name": "osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829",
2026-04-13 00:44:26.651799 | orchestrator |                 "vg_name": "ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829"
2026-04-13 00:44:26.651810 | orchestrator |             },
2026-04-13 00:44:26.651833 | orchestrator |             {
2026-04-13 00:44:26.651845 | orchestrator |                 "lv_name": "osd-block-f99b2314-ad51-5797-a71e-17207c9800e6",
2026-04-13 00:44:26.651856 | orchestrator |                 "vg_name": "ceph-f99b2314-ad51-5797-a71e-17207c9800e6"
2026-04-13 00:44:26.651867 | orchestrator |             }
2026-04-13 00:44:26.651879 | orchestrator |         ],
2026-04-13 00:44:26.651890 | orchestrator |         "pv": [
2026-04-13 00:44:26.651908 | orchestrator |             {
2026-04-13 00:44:26.651920 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-13 00:44:26.651931 | orchestrator |                 "vg_name": "ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829"
2026-04-13 00:44:26.651942 | orchestrator |             },
2026-04-13 00:44:26.651953 | orchestrator |             {
2026-04-13 00:44:26.651967 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-13 00:44:26.651987 | orchestrator |                 "vg_name": "ceph-f99b2314-ad51-5797-a71e-17207c9800e6"
2026-04-13 00:44:26.652006 | orchestrator |             }
2026-04-13 00:44:26.652025 | orchestrator |         ]
2026-04-13 00:44:26.652045 | orchestrator |     }
2026-04-13 00:44:26.652065 | orchestrator | }
2026-04-13 00:44:26.652083 | orchestrator |
2026-04-13 00:44:26.652103 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-13 00:44:26.652123 | orchestrator |
2026-04-13 00:44:26.652143 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-13 00:44:26.652164 | orchestrator | Monday 13 April 2026 00:44:24 +0000 (0:00:00.329) 0:00:27.175 **********
2026-04-13 00:44:26.652183 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-13 00:44:26.652202 | orchestrator |
2026-04-13 00:44:26.652221 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-13 00:44:26.652240 | orchestrator | Monday 13 April 2026 00:44:24 +0000 (0:00:00.249) 0:00:27.425 **********
2026-04-13 00:44:26.652259 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:44:26.652277 | orchestrator |
2026-04-13 00:44:26.652295 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:26.652314 | orchestrator | Monday 13 April 2026 00:44:24 +0000 (0:00:00.235) 0:00:27.660 **********
2026-04-13 00:44:26.652334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-04-13 00:44:26.652353 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-04-13 00:44:26.652365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-04-13 00:44:26.652379 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-04-13 00:44:26.652398 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-04-13 00:44:26.652417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-04-13 00:44:26.652437 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-04-13 00:44:26.652455 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-04-13 00:44:26.652473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-04-13 00:44:26.652496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-04-13 00:44:26.652544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-04-13 00:44:26.652557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-04-13 00:44:26.652568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-04-13 00:44:26.652580 | orchestrator |
2026-04-13 00:44:26.652592 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:26.652611 | orchestrator | Monday 13 April 2026 00:44:24 +0000 (0:00:00.427) 0:00:28.087 **********
2026-04-13 00:44:26.652627 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:26.652657 | orchestrator |
2026-04-13 00:44:26.652677 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:26.652697 | orchestrator | Monday 13 April 2026 00:44:25 +0000 (0:00:00.200) 0:00:28.287 **********
2026-04-13 00:44:26.652717 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:26.652735 | orchestrator |
2026-04-13 00:44:26.652750 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:26.652761 | orchestrator | Monday 13 April 2026 00:44:25 +0000 (0:00:00.203) 0:00:28.492 **********
2026-04-13 00:44:26.652772 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:26.652785 | orchestrator |
2026-04-13 00:44:26.652804 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:26.652824 | orchestrator | Monday 13 April 2026 00:44:25 +0000 (0:00:00.196) 0:00:28.688 **********
2026-04-13 00:44:26.652844 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:26.652865 | orchestrator |
2026-04-13 00:44:26.652884 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:26.652904 | orchestrator | Monday 13 April 2026 00:44:26 +0000 (0:00:00.687) 0:00:29.376 **********
2026-04-13 00:44:26.652917 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:26.652929 | orchestrator |
2026-04-13 00:44:26.652940 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:26.652951 | orchestrator | Monday 13 April 2026 00:44:26 +0000 (0:00:00.205) 0:00:29.582 **********
2026-04-13 00:44:26.652962 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:26.652974 | orchestrator |
2026-04-13 00:44:26.652997 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:37.369294 | orchestrator | Monday 13 April 2026 00:44:26 +0000 (0:00:00.208) 0:00:29.790 **********
2026-04-13 00:44:37.369388 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.369398 | orchestrator |
2026-04-13 00:44:37.369407 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:37.369415 | orchestrator | Monday 13 April 2026 00:44:26 +0000 (0:00:00.216) 0:00:30.007 **********
2026-04-13 00:44:37.369421 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.369428 | orchestrator |
2026-04-13 00:44:37.369434 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:37.369441 | orchestrator | Monday 13 April 2026 00:44:27 +0000 (0:00:00.209) 0:00:30.216 **********
2026-04-13 00:44:37.369461 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7)
2026-04-13 00:44:37.369474 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7)
2026-04-13 00:44:37.369485 | orchestrator |
2026-04-13 00:44:37.369532 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:37.369543 | orchestrator | Monday 13 April 2026 00:44:27 +0000 (0:00:00.484) 0:00:30.700 **********
2026-04-13 00:44:37.369553 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_64ba95e0-52ec-4080-a400-33c71893d605)
2026-04-13 00:44:37.369563 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_64ba95e0-52ec-4080-a400-33c71893d605)
2026-04-13 00:44:37.369574 | orchestrator |
2026-04-13 00:44:37.369599 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:37.369611 | orchestrator | Monday 13 April 2026 00:44:28 +0000 (0:00:00.458) 0:00:31.159 **********
2026-04-13 00:44:37.369621 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8eda79f4-f653-48ca-bc7b-44aba519c194)
2026-04-13 00:44:37.369631 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8eda79f4-f653-48ca-bc7b-44aba519c194)
2026-04-13 00:44:37.369642 | orchestrator |
2026-04-13 00:44:37.369653 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:37.369663 | orchestrator | Monday 13 April 2026 00:44:28 +0000 (0:00:00.451) 0:00:31.610 **********
2026-04-13 00:44:37.369673 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9aa3d683-c16f-4a6c-9923-af2b5f9d7d5e)
2026-04-13 00:44:37.369707 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9aa3d683-c16f-4a6c-9923-af2b5f9d7d5e)
2026-04-13 00:44:37.369718 | orchestrator |
2026-04-13 00:44:37.369729 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:37.369740 | orchestrator | Monday 13 April 2026 00:44:28 +0000 (0:00:00.441) 0:00:32.052 **********
2026-04-13 00:44:37.369750 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-13 00:44:37.369761 | orchestrator |
2026-04-13 00:44:37.369771 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:37.369783 | orchestrator | Monday 13 April 2026 00:44:29 +0000 (0:00:00.342) 0:00:32.394 **********
2026-04-13 00:44:37.369795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-13 00:44:37.369806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-13 00:44:37.369816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-13 00:44:37.369828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-13 00:44:37.369840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-13 00:44:37.369852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-13 00:44:37.369863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-13 00:44:37.369875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-13 00:44:37.369886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-13 00:44:37.369896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-13 00:44:37.369907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-13 00:44:37.369918 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-13 00:44:37.369929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-13 00:44:37.369940 | orchestrator |
2026-04-13 00:44:37.369951 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:37.369959 | orchestrator | Monday 13 April 2026 00:44:29 +0000 (0:00:00.631) 0:00:33.025 **********
2026-04-13 00:44:37.369967 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.369974 | orchestrator |
2026-04-13 00:44:37.369982 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:37.369989 | orchestrator | Monday 13 April 2026 00:44:30 +0000 (0:00:00.226) 0:00:33.252 **********
2026-04-13 00:44:37.369996 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.370003 | orchestrator |
2026-04-13 00:44:37.370011 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:37.370067 | orchestrator | Monday 13 April 2026 00:44:30 +0000 (0:00:00.204) 0:00:33.457 **********
2026-04-13 00:44:37.370076 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.370083 | orchestrator |
2026-04-13 00:44:37.370108 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:37.370116 | orchestrator | Monday 13 April 2026 00:44:30 +0000 (0:00:00.235) 0:00:33.692 **********
2026-04-13 00:44:37.370123 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.370131 | orchestrator |
2026-04-13 00:44:37.370138 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:37.370149 | orchestrator | Monday 13 April 2026 00:44:30 +0000 (0:00:00.211) 0:00:33.903 **********
2026-04-13 00:44:37.370160 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.370171 | orchestrator |
2026-04-13 00:44:37.370182 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:37.370208 | orchestrator | Monday 13 April 2026 00:44:30 +0000 (0:00:00.191) 0:00:34.095 **********
2026-04-13 00:44:37.370219 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.370231 | orchestrator |
2026-04-13 00:44:37.370242 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:37.370254 | orchestrator | Monday 13 April 2026 00:44:31 +0000 (0:00:00.230) 0:00:34.326 **********
2026-04-13 00:44:37.370265 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.370276 | orchestrator |
2026-04-13 00:44:37.370291 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:37.370305 | orchestrator | Monday 13 April 2026 00:44:31 +0000 (0:00:00.201) 0:00:34.528 **********
2026-04-13 00:44:37.370317 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.370329 | orchestrator |
2026-04-13 00:44:37.370341 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:37.370361 | orchestrator | Monday 13 April 2026 00:44:31 +0000 (0:00:00.205) 0:00:34.733 **********
2026-04-13 00:44:37.370373 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-13 00:44:37.370386 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-13 00:44:37.370400 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-13 00:44:37.370413 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-13 00:44:37.370423 | orchestrator |
2026-04-13 00:44:37.370435 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:37.370448 | orchestrator | Monday 13 April 2026 00:44:32 +0000 (0:00:00.856) 0:00:35.590 **********
2026-04-13 00:44:37.370460 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.370472 | orchestrator |
2026-04-13 00:44:37.370485 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:37.370517 | orchestrator | Monday 13 April 2026 00:44:32 +0000 (0:00:00.190) 0:00:35.780 **********
2026-04-13 00:44:37.370528 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.370539 | orchestrator |
2026-04-13 00:44:37.370549 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:37.370559 | orchestrator | Monday 13 April 2026 00:44:32 +0000 (0:00:00.193) 0:00:35.974 **********
2026-04-13 00:44:37.370570 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.370581 | orchestrator |
2026-04-13 00:44:37.370591 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:37.370601 | orchestrator | Monday 13 April 2026 00:44:33 +0000 (0:00:00.705) 0:00:36.680 **********
2026-04-13 00:44:37.370612 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.370623 | orchestrator |
2026-04-13 00:44:37.370633 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-13 00:44:37.370642 | orchestrator | Monday 13 April 2026 00:44:33 +0000 (0:00:00.222) 0:00:36.902 **********
2026-04-13 00:44:37.370648 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.370654 | orchestrator |
2026-04-13 00:44:37.370661 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-13 00:44:37.370667 | orchestrator | Monday 13 April 2026 00:44:33 +0000 (0:00:00.151) 0:00:37.054 **********
2026-04-13 00:44:37.370674 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '976187fe-8802-504d-92cd-339995e22605'}})
2026-04-13 00:44:37.370681 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '204a2e69-8032-57e4-80e8-bdb37f98e657'}})
2026-04-13 00:44:37.370687 | orchestrator |
2026-04-13 00:44:37.370694 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-13 00:44:37.370700 | orchestrator | Monday 13 April 2026 00:44:34 +0000 (0:00:00.203) 0:00:37.258 **********
2026-04-13 00:44:37.370707 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})
2026-04-13 00:44:37.370715 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})
2026-04-13 00:44:37.370730 | orchestrator |
2026-04-13 00:44:37.370736 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-13 00:44:37.370742 | orchestrator | Monday 13 April 2026 00:44:36 +0000 (0:00:01.889) 0:00:39.147 **********
2026-04-13 00:44:37.370749 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})
2026-04-13 00:44:37.370757 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})
2026-04-13 00:44:37.370763 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:37.370770 | orchestrator |
2026-04-13 00:44:37.370776 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-13 00:44:37.370783 | orchestrator | Monday 13 April 2026 00:44:36 +0000 (0:00:00.168) 0:00:39.315 **********
2026-04-13 00:44:37.370789 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})
2026-04-13 00:44:37.370805 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})
2026-04-13 00:44:43.246352 | orchestrator |
2026-04-13 00:44:43.246548 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-13 00:44:43.246579 | orchestrator | Monday 13 April 2026 00:44:37 +0000 (0:00:01.267) 0:00:40.582 **********
2026-04-13 00:44:43.246596 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})
2026-04-13 00:44:43.246610 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})
2026-04-13 00:44:43.246621 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:43.246633 | orchestrator |
2026-04-13 00:44:43.246645 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-13 00:44:43.246657 | orchestrator | Monday 13 April 2026 00:44:37 +0000 (0:00:00.146) 0:00:40.729 **********
2026-04-13 00:44:43.246668 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:43.246680 | orchestrator |
2026-04-13 00:44:43.246691 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-13 00:44:43.246702 | orchestrator | Monday 13 April 2026 00:44:37 +0000 (0:00:00.148) 0:00:40.878 **********
2026-04-13 00:44:43.246714 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})
2026-04-13 00:44:43.246726 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})
2026-04-13 00:44:43.246737 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:43.246749 | orchestrator |
2026-04-13 00:44:43.246760 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-13 00:44:43.246772 | orchestrator | Monday 13 April 2026 00:44:37 +0000 (0:00:00.162) 0:00:41.040 **********
2026-04-13 00:44:43.246783 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:43.246794 | orchestrator |
2026-04-13 00:44:43.246805 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-13 00:44:43.246816 | orchestrator | Monday 13 April 2026 00:44:38 +0000 (0:00:00.133) 0:00:41.174 **********
2026-04-13 00:44:43.246828 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})
2026-04-13 00:44:43.246839 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})
2026-04-13 00:44:43.246873 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:43.246887 | orchestrator |
2026-04-13 00:44:43.246900 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-13 00:44:43.246913 | orchestrator | Monday 13 April 2026 00:44:38 +0000 (0:00:00.152) 0:00:41.327 **********
2026-04-13 00:44:43.246925 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:43.246938 | orchestrator |
2026-04-13 00:44:43.246969 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-13 00:44:43.246982 | orchestrator | Monday 13 April 2026 00:44:38 +0000 (0:00:00.362) 0:00:41.689 **********
2026-04-13 00:44:43.246995 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})
2026-04-13 00:44:43.247008 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})
2026-04-13 00:44:43.247021 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:43.247033 | orchestrator |
2026-04-13 00:44:43.247046 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-13 00:44:43.247059 | orchestrator | Monday 13 April 2026 00:44:38 +0000 (0:00:00.149) 0:00:41.839 **********
2026-04-13 00:44:43.247072 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:44:43.247085 | orchestrator |
2026-04-13 00:44:43.247097 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-13 00:44:43.247110 | orchestrator | Monday 13 April 2026 00:44:38 +0000 (0:00:00.204) 0:00:42.044 **********
2026-04-13 00:44:43.247124 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})
2026-04-13 00:44:43.247137 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})
2026-04-13 00:44:43.247148 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:43.247159 | orchestrator |
2026-04-13 00:44:43.247171 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-13 00:44:43.247182 | orchestrator | Monday 13 April 2026 00:44:39 +0000 (0:00:00.212) 0:00:42.256 **********
2026-04-13 00:44:43.247193 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})
2026-04-13 00:44:43.247205 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})
2026-04-13 00:44:43.247216 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:43.247227 | orchestrator |
2026-04-13 00:44:43.247238 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-13 00:44:43.247268 | orchestrator | Monday 13 April 2026 00:44:39 +0000 (0:00:00.179) 0:00:42.435
********** 2026-04-13 00:44:43.247280 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})  2026-04-13 00:44:43.247292 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})  2026-04-13 00:44:43.247303 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:43.247314 | orchestrator | 2026-04-13 00:44:43.247325 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-13 00:44:43.247337 | orchestrator | Monday 13 April 2026 00:44:39 +0000 (0:00:00.161) 0:00:42.597 ********** 2026-04-13 00:44:43.247348 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:43.247359 | orchestrator | 2026-04-13 00:44:43.247370 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-13 00:44:43.247381 | orchestrator | Monday 13 April 2026 00:44:39 +0000 (0:00:00.166) 0:00:42.763 ********** 2026-04-13 00:44:43.247402 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:43.247414 | orchestrator | 2026-04-13 00:44:43.247425 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-13 00:44:43.247441 | orchestrator | Monday 13 April 2026 00:44:39 +0000 (0:00:00.152) 0:00:42.915 ********** 2026-04-13 00:44:43.247453 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:43.247464 | orchestrator | 2026-04-13 00:44:43.247475 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-13 00:44:43.247543 | orchestrator | Monday 13 April 2026 00:44:39 +0000 (0:00:00.136) 0:00:43.051 ********** 2026-04-13 00:44:43.247559 | orchestrator | ok: [testbed-node-4] => { 2026-04-13 00:44:43.247571 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-13 
00:44:43.247582 | orchestrator | } 2026-04-13 00:44:43.247594 | orchestrator | 2026-04-13 00:44:43.247605 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-13 00:44:43.247616 | orchestrator | Monday 13 April 2026 00:44:40 +0000 (0:00:00.135) 0:00:43.187 ********** 2026-04-13 00:44:43.247627 | orchestrator | ok: [testbed-node-4] => { 2026-04-13 00:44:43.247638 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-13 00:44:43.247649 | orchestrator | } 2026-04-13 00:44:43.247660 | orchestrator | 2026-04-13 00:44:43.247672 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-13 00:44:43.247683 | orchestrator | Monday 13 April 2026 00:44:40 +0000 (0:00:00.146) 0:00:43.334 ********** 2026-04-13 00:44:43.247694 | orchestrator | ok: [testbed-node-4] => { 2026-04-13 00:44:43.247705 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-13 00:44:43.247716 | orchestrator | } 2026-04-13 00:44:43.247727 | orchestrator | 2026-04-13 00:44:43.247738 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-13 00:44:43.247749 | orchestrator | Monday 13 April 2026 00:44:40 +0000 (0:00:00.154) 0:00:43.489 ********** 2026-04-13 00:44:43.247760 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:44:43.247772 | orchestrator | 2026-04-13 00:44:43.247783 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-13 00:44:43.247794 | orchestrator | Monday 13 April 2026 00:44:41 +0000 (0:00:00.770) 0:00:44.259 ********** 2026-04-13 00:44:43.247805 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:44:43.247816 | orchestrator | 2026-04-13 00:44:43.247827 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-13 00:44:43.247838 | orchestrator | Monday 13 April 2026 00:44:41 +0000 (0:00:00.490) 0:00:44.750 ********** 2026-04-13 
00:44:43.247849 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:44:43.247860 | orchestrator | 2026-04-13 00:44:43.247872 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-13 00:44:43.247883 | orchestrator | Monday 13 April 2026 00:44:42 +0000 (0:00:00.502) 0:00:45.252 ********** 2026-04-13 00:44:43.247894 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:44:43.247905 | orchestrator | 2026-04-13 00:44:43.247916 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-13 00:44:43.247927 | orchestrator | Monday 13 April 2026 00:44:42 +0000 (0:00:00.176) 0:00:45.429 ********** 2026-04-13 00:44:43.247938 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:43.247949 | orchestrator | 2026-04-13 00:44:43.247961 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-13 00:44:43.247972 | orchestrator | Monday 13 April 2026 00:44:42 +0000 (0:00:00.110) 0:00:45.539 ********** 2026-04-13 00:44:43.247983 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:43.247994 | orchestrator | 2026-04-13 00:44:43.248005 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-13 00:44:43.248017 | orchestrator | Monday 13 April 2026 00:44:42 +0000 (0:00:00.121) 0:00:45.661 ********** 2026-04-13 00:44:43.248028 | orchestrator | ok: [testbed-node-4] => { 2026-04-13 00:44:43.248039 | orchestrator |  "vgs_report": { 2026-04-13 00:44:43.248051 | orchestrator |  "vg": [] 2026-04-13 00:44:43.248062 | orchestrator |  } 2026-04-13 00:44:43.248074 | orchestrator | } 2026-04-13 00:44:43.248092 | orchestrator | 2026-04-13 00:44:43.248104 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-13 00:44:43.248115 | orchestrator | Monday 13 April 2026 00:44:42 +0000 (0:00:00.152) 0:00:45.813 ********** 2026-04-13 00:44:43.248126 | 
orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:43.248138 | orchestrator | 2026-04-13 00:44:43.248149 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-13 00:44:43.248160 | orchestrator | Monday 13 April 2026 00:44:42 +0000 (0:00:00.141) 0:00:45.955 ********** 2026-04-13 00:44:43.248171 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:43.248182 | orchestrator | 2026-04-13 00:44:43.248193 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-13 00:44:43.248205 | orchestrator | Monday 13 April 2026 00:44:42 +0000 (0:00:00.146) 0:00:46.101 ********** 2026-04-13 00:44:43.248216 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:43.248227 | orchestrator | 2026-04-13 00:44:43.248238 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-13 00:44:43.248249 | orchestrator | Monday 13 April 2026 00:44:43 +0000 (0:00:00.138) 0:00:46.240 ********** 2026-04-13 00:44:43.248261 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:43.248272 | orchestrator | 2026-04-13 00:44:43.248291 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-13 00:44:47.987421 | orchestrator | Monday 13 April 2026 00:44:43 +0000 (0:00:00.143) 0:00:46.383 ********** 2026-04-13 00:44:47.987583 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.987609 | orchestrator | 2026-04-13 00:44:47.987627 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-13 00:44:47.987641 | orchestrator | Monday 13 April 2026 00:44:43 +0000 (0:00:00.134) 0:00:46.518 ********** 2026-04-13 00:44:47.987651 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.987660 | orchestrator | 2026-04-13 00:44:47.987670 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 
2026-04-13 00:44:47.987679 | orchestrator | Monday 13 April 2026 00:44:43 +0000 (0:00:00.372) 0:00:46.891 ********** 2026-04-13 00:44:47.987688 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.987696 | orchestrator | 2026-04-13 00:44:47.987705 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-13 00:44:47.987715 | orchestrator | Monday 13 April 2026 00:44:43 +0000 (0:00:00.132) 0:00:47.023 ********** 2026-04-13 00:44:47.987723 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.987732 | orchestrator | 2026-04-13 00:44:47.987741 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-13 00:44:47.987750 | orchestrator | Monday 13 April 2026 00:44:43 +0000 (0:00:00.118) 0:00:47.142 ********** 2026-04-13 00:44:47.987774 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.987784 | orchestrator | 2026-04-13 00:44:47.987793 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-13 00:44:47.987802 | orchestrator | Monday 13 April 2026 00:44:44 +0000 (0:00:00.141) 0:00:47.283 ********** 2026-04-13 00:44:47.987810 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.987819 | orchestrator | 2026-04-13 00:44:47.987828 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-13 00:44:47.987837 | orchestrator | Monday 13 April 2026 00:44:44 +0000 (0:00:00.140) 0:00:47.424 ********** 2026-04-13 00:44:47.987846 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.987855 | orchestrator | 2026-04-13 00:44:47.987864 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-13 00:44:47.987873 | orchestrator | Monday 13 April 2026 00:44:44 +0000 (0:00:00.135) 0:00:47.559 ********** 2026-04-13 00:44:47.987882 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.987891 
| orchestrator | 2026-04-13 00:44:47.987900 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-13 00:44:47.987909 | orchestrator | Monday 13 April 2026 00:44:44 +0000 (0:00:00.129) 0:00:47.688 ********** 2026-04-13 00:44:47.987918 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.987949 | orchestrator | 2026-04-13 00:44:47.987960 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-13 00:44:47.987970 | orchestrator | Monday 13 April 2026 00:44:44 +0000 (0:00:00.140) 0:00:47.829 ********** 2026-04-13 00:44:47.987980 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.987990 | orchestrator | 2026-04-13 00:44:47.988001 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-13 00:44:47.988011 | orchestrator | Monday 13 April 2026 00:44:44 +0000 (0:00:00.146) 0:00:47.975 ********** 2026-04-13 00:44:47.988021 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})  2026-04-13 00:44:47.988034 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})  2026-04-13 00:44:47.988043 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.988053 | orchestrator | 2026-04-13 00:44:47.988063 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-13 00:44:47.988073 | orchestrator | Monday 13 April 2026 00:44:44 +0000 (0:00:00.162) 0:00:48.138 ********** 2026-04-13 00:44:47.988083 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})  2026-04-13 00:44:47.988094 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})  2026-04-13 00:44:47.988104 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.988113 | orchestrator | 2026-04-13 00:44:47.988123 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-13 00:44:47.988133 | orchestrator | Monday 13 April 2026 00:44:45 +0000 (0:00:00.154) 0:00:48.293 ********** 2026-04-13 00:44:47.988144 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})  2026-04-13 00:44:47.988154 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})  2026-04-13 00:44:47.988164 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.988179 | orchestrator | 2026-04-13 00:44:47.988195 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-13 00:44:47.988209 | orchestrator | Monday 13 April 2026 00:44:45 +0000 (0:00:00.154) 0:00:48.447 ********** 2026-04-13 00:44:47.988223 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})  2026-04-13 00:44:47.988237 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})  2026-04-13 00:44:47.988255 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.988270 | orchestrator | 2026-04-13 00:44:47.988307 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-13 00:44:47.988323 | orchestrator | Monday 13 April 2026 00:44:45 +0000 (0:00:00.390) 0:00:48.838 ********** 2026-04-13 
00:44:47.988339 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})  2026-04-13 00:44:47.988354 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})  2026-04-13 00:44:47.988370 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.988385 | orchestrator | 2026-04-13 00:44:47.988401 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-13 00:44:47.988415 | orchestrator | Monday 13 April 2026 00:44:45 +0000 (0:00:00.164) 0:00:49.002 ********** 2026-04-13 00:44:47.988440 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})  2026-04-13 00:44:47.988450 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})  2026-04-13 00:44:47.988459 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.988468 | orchestrator | 2026-04-13 00:44:47.988477 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-13 00:44:47.988550 | orchestrator | Monday 13 April 2026 00:44:46 +0000 (0:00:00.152) 0:00:49.155 ********** 2026-04-13 00:44:47.988561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})  2026-04-13 00:44:47.988572 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})  2026-04-13 00:44:47.988588 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.988602 | orchestrator | 
2026-04-13 00:44:47.988616 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-13 00:44:47.988631 | orchestrator | Monday 13 April 2026 00:44:46 +0000 (0:00:00.157) 0:00:49.312 ********** 2026-04-13 00:44:47.988646 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})  2026-04-13 00:44:47.988660 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})  2026-04-13 00:44:47.988675 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.988689 | orchestrator | 2026-04-13 00:44:47.988705 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-13 00:44:47.988722 | orchestrator | Monday 13 April 2026 00:44:46 +0000 (0:00:00.153) 0:00:49.466 ********** 2026-04-13 00:44:47.988738 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:44:47.988753 | orchestrator | 2026-04-13 00:44:47.988769 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-13 00:44:47.988779 | orchestrator | Monday 13 April 2026 00:44:46 +0000 (0:00:00.506) 0:00:49.972 ********** 2026-04-13 00:44:47.988787 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:44:47.988796 | orchestrator | 2026-04-13 00:44:47.988808 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-13 00:44:47.988823 | orchestrator | Monday 13 April 2026 00:44:47 +0000 (0:00:00.508) 0:00:50.481 ********** 2026-04-13 00:44:47.988839 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:44:47.988854 | orchestrator | 2026-04-13 00:44:47.988870 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-13 00:44:47.988885 | orchestrator | Monday 13 April 2026 
00:44:47 +0000 (0:00:00.156) 0:00:50.637 ********** 2026-04-13 00:44:47.988901 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'vg_name': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'}) 2026-04-13 00:44:47.988920 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'vg_name': 'ceph-976187fe-8802-504d-92cd-339995e22605'}) 2026-04-13 00:44:47.988936 | orchestrator | 2026-04-13 00:44:47.988951 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-13 00:44:47.988966 | orchestrator | Monday 13 April 2026 00:44:47 +0000 (0:00:00.217) 0:00:50.855 ********** 2026-04-13 00:44:47.988975 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})  2026-04-13 00:44:47.989023 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})  2026-04-13 00:44:47.989033 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:47.989051 | orchestrator | 2026-04-13 00:44:47.989060 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-13 00:44:47.989069 | orchestrator | Monday 13 April 2026 00:44:47 +0000 (0:00:00.192) 0:00:51.048 ********** 2026-04-13 00:44:47.989078 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})  2026-04-13 00:44:47.989098 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})  2026-04-13 00:44:54.969174 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:54.969288 | orchestrator | 2026-04-13 
00:44:54.969306 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-13 00:44:54.969320 | orchestrator | Monday 13 April 2026 00:44:48 +0000 (0:00:00.202) 0:00:51.250 ********** 2026-04-13 00:44:54.969333 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})  2026-04-13 00:44:54.969346 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})  2026-04-13 00:44:54.969357 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:54.969369 | orchestrator | 2026-04-13 00:44:54.969381 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-13 00:44:54.969392 | orchestrator | Monday 13 April 2026 00:44:48 +0000 (0:00:00.186) 0:00:51.437 ********** 2026-04-13 00:44:54.969404 | orchestrator | ok: [testbed-node-4] => { 2026-04-13 00:44:54.969416 | orchestrator |  "lvm_report": { 2026-04-13 00:44:54.969431 | orchestrator |  "lv": [ 2026-04-13 00:44:54.969469 | orchestrator |  { 2026-04-13 00:44:54.969549 | orchestrator |  "lv_name": "osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657", 2026-04-13 00:44:54.969571 | orchestrator |  "vg_name": "ceph-204a2e69-8032-57e4-80e8-bdb37f98e657" 2026-04-13 00:44:54.969588 | orchestrator |  }, 2026-04-13 00:44:54.969605 | orchestrator |  { 2026-04-13 00:44:54.969622 | orchestrator |  "lv_name": "osd-block-976187fe-8802-504d-92cd-339995e22605", 2026-04-13 00:44:54.969640 | orchestrator |  "vg_name": "ceph-976187fe-8802-504d-92cd-339995e22605" 2026-04-13 00:44:54.969658 | orchestrator |  } 2026-04-13 00:44:54.969677 | orchestrator |  ], 2026-04-13 00:44:54.969695 | orchestrator |  "pv": [ 2026-04-13 00:44:54.969713 | orchestrator |  { 2026-04-13 00:44:54.969733 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-13 
00:44:54.969754 | orchestrator |  "vg_name": "ceph-976187fe-8802-504d-92cd-339995e22605" 2026-04-13 00:44:54.969773 | orchestrator |  }, 2026-04-13 00:44:54.969793 | orchestrator |  { 2026-04-13 00:44:54.969812 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-13 00:44:54.969831 | orchestrator |  "vg_name": "ceph-204a2e69-8032-57e4-80e8-bdb37f98e657" 2026-04-13 00:44:54.969853 | orchestrator |  } 2026-04-13 00:44:54.969873 | orchestrator |  ] 2026-04-13 00:44:54.969892 | orchestrator |  } 2026-04-13 00:44:54.969913 | orchestrator | } 2026-04-13 00:44:54.969934 | orchestrator | 2026-04-13 00:44:54.969954 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-13 00:44:54.969971 | orchestrator | 2026-04-13 00:44:54.969985 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-13 00:44:54.969998 | orchestrator | Monday 13 April 2026 00:44:48 +0000 (0:00:00.599) 0:00:52.036 ********** 2026-04-13 00:44:54.970011 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-13 00:44:54.970089 | orchestrator | 2026-04-13 00:44:54.970103 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-13 00:44:54.970116 | orchestrator | Monday 13 April 2026 00:44:49 +0000 (0:00:00.308) 0:00:52.344 ********** 2026-04-13 00:44:54.970155 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:44:54.970167 | orchestrator | 2026-04-13 00:44:54.970178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:54.970190 | orchestrator | Monday 13 April 2026 00:44:49 +0000 (0:00:00.262) 0:00:52.607 ********** 2026-04-13 00:44:54.970201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-13 00:44:54.970213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-13 
00:44:54.970224 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-13 00:44:54.970240 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-13 00:44:54.970252 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-13 00:44:54.970263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-13 00:44:54.970275 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-13 00:44:54.970286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-13 00:44:54.970297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-13 00:44:54.970309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-13 00:44:54.970320 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-13 00:44:54.970331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-13 00:44:54.970342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-13 00:44:54.970354 | orchestrator | 2026-04-13 00:44:54.970365 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:54.970376 | orchestrator | Monday 13 April 2026 00:44:49 +0000 (0:00:00.437) 0:00:53.044 ********** 2026-04-13 00:44:54.970387 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:54.970398 | orchestrator | 2026-04-13 00:44:54.970410 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:54.970421 | orchestrator | Monday 13 April 2026 00:44:50 +0000 (0:00:00.249) 0:00:53.294 
********** 2026-04-13 00:44:54.970432 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:54.970444 | orchestrator | 2026-04-13 00:44:54.970455 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:54.970513 | orchestrator | Monday 13 April 2026 00:44:50 +0000 (0:00:00.230) 0:00:53.524 ********** 2026-04-13 00:44:54.970526 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:54.970537 | orchestrator | 2026-04-13 00:44:54.970548 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:54.970560 | orchestrator | Monday 13 April 2026 00:44:50 +0000 (0:00:00.213) 0:00:53.738 ********** 2026-04-13 00:44:54.970571 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:54.970582 | orchestrator | 2026-04-13 00:44:54.970593 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:54.970604 | orchestrator | Monday 13 April 2026 00:44:50 +0000 (0:00:00.194) 0:00:53.932 ********** 2026-04-13 00:44:54.970615 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:54.970626 | orchestrator | 2026-04-13 00:44:54.970638 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:54.970649 | orchestrator | Monday 13 April 2026 00:44:51 +0000 (0:00:00.266) 0:00:54.199 ********** 2026-04-13 00:44:54.970661 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:54.970672 | orchestrator | 2026-04-13 00:44:54.970683 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:54.970704 | orchestrator | Monday 13 April 2026 00:44:51 +0000 (0:00:00.900) 0:00:55.099 ********** 2026-04-13 00:44:54.970715 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:54.970735 | orchestrator | 2026-04-13 00:44:54.970747 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-04-13 00:44:54.970758 | orchestrator | Monday 13 April 2026 00:44:52 +0000 (0:00:00.240) 0:00:55.339 ********** 2026-04-13 00:44:54.970770 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:54.970781 | orchestrator | 2026-04-13 00:44:54.970792 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:54.970803 | orchestrator | Monday 13 April 2026 00:44:52 +0000 (0:00:00.198) 0:00:55.538 ********** 2026-04-13 00:44:54.970814 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a) 2026-04-13 00:44:54.970827 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a) 2026-04-13 00:44:54.970838 | orchestrator | 2026-04-13 00:44:54.970849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:54.970861 | orchestrator | Monday 13 April 2026 00:44:52 +0000 (0:00:00.554) 0:00:56.092 ********** 2026-04-13 00:44:54.970872 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2beae69f-4f2c-4ffb-b1cc-4fe56058469a) 2026-04-13 00:44:54.970883 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2beae69f-4f2c-4ffb-b1cc-4fe56058469a) 2026-04-13 00:44:54.970894 | orchestrator | 2026-04-13 00:44:54.970906 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:54.970917 | orchestrator | Monday 13 April 2026 00:44:53 +0000 (0:00:00.435) 0:00:56.528 ********** 2026-04-13 00:44:54.970928 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7036bc7f-1d9f-4bbc-89ec-79faed4557a7) 2026-04-13 00:44:54.970939 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7036bc7f-1d9f-4bbc-89ec-79faed4557a7) 2026-04-13 00:44:54.970951 | orchestrator | 2026-04-13 00:44:54.970962 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:54.970973 | orchestrator | Monday 13 April 2026 00:44:53 +0000 (0:00:00.474) 0:00:57.002 ********** 2026-04-13 00:44:54.970984 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_210099df-3e7f-48c2-8d6b-572e8a7c1923) 2026-04-13 00:44:54.970995 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_210099df-3e7f-48c2-8d6b-572e8a7c1923) 2026-04-13 00:44:54.971006 | orchestrator | 2026-04-13 00:44:54.971018 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:54.971029 | orchestrator | Monday 13 April 2026 00:44:54 +0000 (0:00:00.436) 0:00:57.439 ********** 2026-04-13 00:44:54.971040 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-13 00:44:54.971052 | orchestrator | 2026-04-13 00:44:54.971063 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:54.971074 | orchestrator | Monday 13 April 2026 00:44:54 +0000 (0:00:00.341) 0:00:57.780 ********** 2026-04-13 00:44:54.971086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-13 00:44:54.971097 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-13 00:44:54.971108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-13 00:44:54.971119 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-13 00:44:54.971130 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-13 00:44:54.971141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-13 00:44:54.971152 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-13 00:44:54.971163 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-13 00:44:54.971175 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-13 00:44:54.971192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-13 00:44:54.971204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-13 00:44:54.971222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-13 00:45:03.789089 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-13 00:45:03.789204 | orchestrator | 2026-04-13 00:45:03.789222 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:45:03.789235 | orchestrator | Monday 13 April 2026 00:44:55 +0000 (0:00:00.410) 0:00:58.191 ********** 2026-04-13 00:45:03.789247 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.789260 | orchestrator | 2026-04-13 00:45:03.789272 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:45:03.789283 | orchestrator | Monday 13 April 2026 00:44:55 +0000 (0:00:00.193) 0:00:58.384 ********** 2026-04-13 00:45:03.789295 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.789306 | orchestrator | 2026-04-13 00:45:03.789318 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:45:03.789329 | orchestrator | Monday 13 April 2026 00:44:55 +0000 (0:00:00.202) 0:00:58.587 ********** 2026-04-13 00:45:03.789341 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.789352 | orchestrator | 2026-04-13 00:45:03.789363 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:45:03.789391 | orchestrator | Monday 13 April 2026 00:44:56 +0000 (0:00:00.683) 0:00:59.270 ********** 2026-04-13 00:45:03.789403 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.789415 | orchestrator | 2026-04-13 00:45:03.789426 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:45:03.789437 | orchestrator | Monday 13 April 2026 00:44:56 +0000 (0:00:00.198) 0:00:59.468 ********** 2026-04-13 00:45:03.789449 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.789460 | orchestrator | 2026-04-13 00:45:03.789504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:45:03.789517 | orchestrator | Monday 13 April 2026 00:44:56 +0000 (0:00:00.197) 0:00:59.666 ********** 2026-04-13 00:45:03.789528 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.789540 | orchestrator | 2026-04-13 00:45:03.789551 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:45:03.789577 | orchestrator | Monday 13 April 2026 00:44:56 +0000 (0:00:00.207) 0:00:59.873 ********** 2026-04-13 00:45:03.789599 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.789611 | orchestrator | 2026-04-13 00:45:03.789624 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:45:03.789637 | orchestrator | Monday 13 April 2026 00:44:56 +0000 (0:00:00.195) 0:01:00.069 ********** 2026-04-13 00:45:03.789651 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.789664 | orchestrator | 2026-04-13 00:45:03.789678 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:45:03.789692 | orchestrator | Monday 13 April 2026 00:44:57 +0000 (0:00:00.200) 0:01:00.270 ********** 
2026-04-13 00:45:03.789705 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-13 00:45:03.789719 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-13 00:45:03.789733 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-13 00:45:03.789752 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-13 00:45:03.789771 | orchestrator | 2026-04-13 00:45:03.789791 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:45:03.789811 | orchestrator | Monday 13 April 2026 00:44:57 +0000 (0:00:00.654) 0:01:00.925 ********** 2026-04-13 00:45:03.789830 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.789842 | orchestrator | 2026-04-13 00:45:03.789853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:45:03.789888 | orchestrator | Monday 13 April 2026 00:44:58 +0000 (0:00:00.222) 0:01:01.147 ********** 2026-04-13 00:45:03.789900 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.789911 | orchestrator | 2026-04-13 00:45:03.789923 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:45:03.789934 | orchestrator | Monday 13 April 2026 00:44:58 +0000 (0:00:00.240) 0:01:01.388 ********** 2026-04-13 00:45:03.789945 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.789956 | orchestrator | 2026-04-13 00:45:03.789967 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:45:03.789978 | orchestrator | Monday 13 April 2026 00:44:58 +0000 (0:00:00.242) 0:01:01.630 ********** 2026-04-13 00:45:03.789989 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.790000 | orchestrator | 2026-04-13 00:45:03.790012 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-13 00:45:03.790081 | orchestrator | Monday 13 April 2026 00:44:58 +0000 
(0:00:00.195) 0:01:01.826 ********** 2026-04-13 00:45:03.790093 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.790105 | orchestrator | 2026-04-13 00:45:03.790116 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-13 00:45:03.790127 | orchestrator | Monday 13 April 2026 00:44:59 +0000 (0:00:00.376) 0:01:02.202 ********** 2026-04-13 00:45:03.790138 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ae95053f-cfae-50f3-8301-23c2132e6da4'}}) 2026-04-13 00:45:03.790150 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '42f39a41-1a89-55d6-ba76-16e64e7a2b2d'}}) 2026-04-13 00:45:03.790161 | orchestrator | 2026-04-13 00:45:03.790173 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-13 00:45:03.790185 | orchestrator | Monday 13 April 2026 00:44:59 +0000 (0:00:00.194) 0:01:02.396 ********** 2026-04-13 00:45:03.790197 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'}) 2026-04-13 00:45:03.790210 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'}) 2026-04-13 00:45:03.790222 | orchestrator | 2026-04-13 00:45:03.790234 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-13 00:45:03.790264 | orchestrator | Monday 13 April 2026 00:45:01 +0000 (0:00:01.784) 0:01:04.181 ********** 2026-04-13 00:45:03.790276 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:03.790289 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:03.790300 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.790312 | orchestrator | 2026-04-13 00:45:03.790323 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-13 00:45:03.790334 | orchestrator | Monday 13 April 2026 00:45:01 +0000 (0:00:00.162) 0:01:04.343 ********** 2026-04-13 00:45:03.790346 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'}) 2026-04-13 00:45:03.790357 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'}) 2026-04-13 00:45:03.790369 | orchestrator | 2026-04-13 00:45:03.790380 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-13 00:45:03.790391 | orchestrator | Monday 13 April 2026 00:45:02 +0000 (0:00:01.364) 0:01:05.707 ********** 2026-04-13 00:45:03.790403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:03.790436 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:03.790448 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.790459 | orchestrator | 2026-04-13 00:45:03.790470 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-13 00:45:03.790560 | orchestrator | Monday 13 April 2026 00:45:02 +0000 (0:00:00.152) 0:01:05.860 ********** 2026-04-13 00:45:03.790571 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.790582 | 
orchestrator | 2026-04-13 00:45:03.790593 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-13 00:45:03.790604 | orchestrator | Monday 13 April 2026 00:45:02 +0000 (0:00:00.122) 0:01:05.983 ********** 2026-04-13 00:45:03.790615 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:03.790627 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:03.790638 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.790649 | orchestrator | 2026-04-13 00:45:03.790660 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-13 00:45:03.790672 | orchestrator | Monday 13 April 2026 00:45:02 +0000 (0:00:00.157) 0:01:06.140 ********** 2026-04-13 00:45:03.790683 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.790694 | orchestrator | 2026-04-13 00:45:03.790705 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-13 00:45:03.790726 | orchestrator | Monday 13 April 2026 00:45:03 +0000 (0:00:00.133) 0:01:06.274 ********** 2026-04-13 00:45:03.790738 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:03.790749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:03.790761 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.790772 | orchestrator | 2026-04-13 00:45:03.790783 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-04-13 00:45:03.790795 | orchestrator | Monday 13 April 2026 00:45:03 +0000 (0:00:00.151) 0:01:06.425 ********** 2026-04-13 00:45:03.790806 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.790817 | orchestrator | 2026-04-13 00:45:03.790829 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-13 00:45:03.790840 | orchestrator | Monday 13 April 2026 00:45:03 +0000 (0:00:00.138) 0:01:06.563 ********** 2026-04-13 00:45:03.790851 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:03.790862 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:03.790874 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:03.790885 | orchestrator | 2026-04-13 00:45:03.790896 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-13 00:45:03.790907 | orchestrator | Monday 13 April 2026 00:45:03 +0000 (0:00:00.160) 0:01:06.724 ********** 2026-04-13 00:45:03.790918 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:45:03.790930 | orchestrator | 2026-04-13 00:45:03.790941 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-13 00:45:03.790952 | orchestrator | Monday 13 April 2026 00:45:03 +0000 (0:00:00.132) 0:01:06.857 ********** 2026-04-13 00:45:03.790972 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:10.192462 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:10.192674 | 
orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.192693 | orchestrator | 2026-04-13 00:45:10.192706 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-13 00:45:10.192720 | orchestrator | Monday 13 April 2026 00:45:04 +0000 (0:00:00.404) 0:01:07.261 ********** 2026-04-13 00:45:10.192732 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:10.192744 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:10.192755 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.192767 | orchestrator | 2026-04-13 00:45:10.192794 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-13 00:45:10.192806 | orchestrator | Monday 13 April 2026 00:45:04 +0000 (0:00:00.162) 0:01:07.424 ********** 2026-04-13 00:45:10.192818 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:10.192830 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:10.192841 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.192853 | orchestrator | 2026-04-13 00:45:10.192864 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-13 00:45:10.192876 | orchestrator | Monday 13 April 2026 00:45:04 +0000 (0:00:00.159) 0:01:07.584 ********** 2026-04-13 00:45:10.192887 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.192898 | orchestrator | 2026-04-13 00:45:10.192910 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-13 00:45:10.192921 | orchestrator | Monday 13 April 2026 00:45:04 +0000 (0:00:00.129) 0:01:07.714 ********** 2026-04-13 00:45:10.192932 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.192944 | orchestrator | 2026-04-13 00:45:10.192955 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-13 00:45:10.192968 | orchestrator | Monday 13 April 2026 00:45:04 +0000 (0:00:00.143) 0:01:07.858 ********** 2026-04-13 00:45:10.192981 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.192995 | orchestrator | 2026-04-13 00:45:10.193008 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-13 00:45:10.193021 | orchestrator | Monday 13 April 2026 00:45:04 +0000 (0:00:00.144) 0:01:08.003 ********** 2026-04-13 00:45:10.193033 | orchestrator | ok: [testbed-node-5] => { 2026-04-13 00:45:10.193047 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-13 00:45:10.193060 | orchestrator | } 2026-04-13 00:45:10.193074 | orchestrator | 2026-04-13 00:45:10.193087 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-13 00:45:10.193099 | orchestrator | Monday 13 April 2026 00:45:05 +0000 (0:00:00.142) 0:01:08.146 ********** 2026-04-13 00:45:10.193112 | orchestrator | ok: [testbed-node-5] => { 2026-04-13 00:45:10.193126 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-13 00:45:10.193138 | orchestrator | } 2026-04-13 00:45:10.193151 | orchestrator | 2026-04-13 00:45:10.193164 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-13 00:45:10.193177 | orchestrator | Monday 13 April 2026 00:45:05 +0000 (0:00:00.150) 0:01:08.296 ********** 2026-04-13 00:45:10.193190 | orchestrator | ok: [testbed-node-5] => { 2026-04-13 00:45:10.193202 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-04-13 00:45:10.193215 | orchestrator | } 2026-04-13 00:45:10.193228 | orchestrator | 2026-04-13 00:45:10.193240 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-13 00:45:10.193253 | orchestrator | Monday 13 April 2026 00:45:05 +0000 (0:00:00.145) 0:01:08.441 ********** 2026-04-13 00:45:10.193289 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:45:10.193304 | orchestrator | 2026-04-13 00:45:10.193318 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-13 00:45:10.193330 | orchestrator | Monday 13 April 2026 00:45:05 +0000 (0:00:00.516) 0:01:08.958 ********** 2026-04-13 00:45:10.193341 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:45:10.193352 | orchestrator | 2026-04-13 00:45:10.193364 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-13 00:45:10.193375 | orchestrator | Monday 13 April 2026 00:45:06 +0000 (0:00:00.497) 0:01:09.455 ********** 2026-04-13 00:45:10.193387 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:45:10.193398 | orchestrator | 2026-04-13 00:45:10.193409 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-13 00:45:10.193421 | orchestrator | Monday 13 April 2026 00:45:06 +0000 (0:00:00.468) 0:01:09.924 ********** 2026-04-13 00:45:10.193432 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:45:10.193443 | orchestrator | 2026-04-13 00:45:10.193454 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-13 00:45:10.193485 | orchestrator | Monday 13 April 2026 00:45:07 +0000 (0:00:00.380) 0:01:10.304 ********** 2026-04-13 00:45:10.193497 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.193508 | orchestrator | 2026-04-13 00:45:10.193519 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-04-13 00:45:10.193531 | orchestrator | Monday 13 April 2026 00:45:07 +0000 (0:00:00.109) 0:01:10.414 ********** 2026-04-13 00:45:10.193542 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.193553 | orchestrator | 2026-04-13 00:45:10.193564 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-13 00:45:10.193575 | orchestrator | Monday 13 April 2026 00:45:07 +0000 (0:00:00.133) 0:01:10.548 ********** 2026-04-13 00:45:10.193586 | orchestrator | ok: [testbed-node-5] => { 2026-04-13 00:45:10.193597 | orchestrator |  "vgs_report": { 2026-04-13 00:45:10.193609 | orchestrator |  "vg": [] 2026-04-13 00:45:10.193639 | orchestrator |  } 2026-04-13 00:45:10.193652 | orchestrator | } 2026-04-13 00:45:10.193663 | orchestrator | 2026-04-13 00:45:10.193674 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-13 00:45:10.193686 | orchestrator | Monday 13 April 2026 00:45:07 +0000 (0:00:00.145) 0:01:10.693 ********** 2026-04-13 00:45:10.193697 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.193708 | orchestrator | 2026-04-13 00:45:10.193719 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-13 00:45:10.193731 | orchestrator | Monday 13 April 2026 00:45:07 +0000 (0:00:00.128) 0:01:10.822 ********** 2026-04-13 00:45:10.193742 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.193753 | orchestrator | 2026-04-13 00:45:10.193764 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-13 00:45:10.193775 | orchestrator | Monday 13 April 2026 00:45:07 +0000 (0:00:00.161) 0:01:10.984 ********** 2026-04-13 00:45:10.193787 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.193798 | orchestrator | 2026-04-13 00:45:10.193809 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-04-13 00:45:10.193826 | orchestrator | Monday 13 April 2026 00:45:07 +0000 (0:00:00.134) 0:01:11.118 ********** 2026-04-13 00:45:10.193837 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.193849 | orchestrator | 2026-04-13 00:45:10.193860 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-13 00:45:10.193871 | orchestrator | Monday 13 April 2026 00:45:08 +0000 (0:00:00.127) 0:01:11.246 ********** 2026-04-13 00:45:10.193882 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.193893 | orchestrator | 2026-04-13 00:45:10.193904 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-13 00:45:10.193916 | orchestrator | Monday 13 April 2026 00:45:08 +0000 (0:00:00.140) 0:01:11.387 ********** 2026-04-13 00:45:10.193927 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.193955 | orchestrator | 2026-04-13 00:45:10.193975 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-13 00:45:10.193994 | orchestrator | Monday 13 April 2026 00:45:08 +0000 (0:00:00.170) 0:01:11.557 ********** 2026-04-13 00:45:10.194013 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.194121 | orchestrator | 2026-04-13 00:45:10.194133 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-13 00:45:10.194145 | orchestrator | Monday 13 April 2026 00:45:08 +0000 (0:00:00.149) 0:01:11.707 ********** 2026-04-13 00:45:10.194156 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.194167 | orchestrator | 2026-04-13 00:45:10.194179 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-13 00:45:10.194191 | orchestrator | Monday 13 April 2026 00:45:08 +0000 (0:00:00.135) 0:01:11.843 ********** 2026-04-13 00:45:10.194202 | orchestrator | skipping: 
[testbed-node-5] 2026-04-13 00:45:10.194213 | orchestrator | 2026-04-13 00:45:10.194225 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-13 00:45:10.194236 | orchestrator | Monday 13 April 2026 00:45:09 +0000 (0:00:00.400) 0:01:12.244 ********** 2026-04-13 00:45:10.194247 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.194259 | orchestrator | 2026-04-13 00:45:10.194270 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-13 00:45:10.194281 | orchestrator | Monday 13 April 2026 00:45:09 +0000 (0:00:00.156) 0:01:12.400 ********** 2026-04-13 00:45:10.194292 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.194304 | orchestrator | 2026-04-13 00:45:10.194315 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-13 00:45:10.194327 | orchestrator | Monday 13 April 2026 00:45:09 +0000 (0:00:00.146) 0:01:12.547 ********** 2026-04-13 00:45:10.194338 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.194349 | orchestrator | 2026-04-13 00:45:10.194361 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-13 00:45:10.194372 | orchestrator | Monday 13 April 2026 00:45:09 +0000 (0:00:00.137) 0:01:12.684 ********** 2026-04-13 00:45:10.194383 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.194394 | orchestrator | 2026-04-13 00:45:10.194406 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-13 00:45:10.194417 | orchestrator | Monday 13 April 2026 00:45:09 +0000 (0:00:00.135) 0:01:12.820 ********** 2026-04-13 00:45:10.194428 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.194440 | orchestrator | 2026-04-13 00:45:10.194451 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-13 00:45:10.194463 | 
orchestrator | Monday 13 April 2026 00:45:09 +0000 (0:00:00.139) 0:01:12.959 ********** 2026-04-13 00:45:10.194502 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:10.194514 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:10.194525 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.194537 | orchestrator | 2026-04-13 00:45:10.194548 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-13 00:45:10.194559 | orchestrator | Monday 13 April 2026 00:45:09 +0000 (0:00:00.153) 0:01:13.113 ********** 2026-04-13 00:45:10.194570 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:10.194582 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:10.194593 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:10.194604 | orchestrator | 2026-04-13 00:45:10.194615 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-13 00:45:10.194637 | orchestrator | Monday 13 April 2026 00:45:10 +0000 (0:00:00.151) 0:01:13.265 ********** 2026-04-13 00:45:10.194658 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:13.301696 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 
00:45:13.301944 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:13.301975 | orchestrator | 2026-04-13 00:45:13.301997 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-13 00:45:13.302013 | orchestrator | Monday 13 April 2026 00:45:10 +0000 (0:00:00.156) 0:01:13.421 ********** 2026-04-13 00:45:13.302109 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:13.302151 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:13.302171 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:13.302187 | orchestrator | 2026-04-13 00:45:13.302199 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-13 00:45:13.302211 | orchestrator | Monday 13 April 2026 00:45:10 +0000 (0:00:00.147) 0:01:13.568 ********** 2026-04-13 00:45:13.302230 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:13.302242 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:13.302253 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:13.302265 | orchestrator | 2026-04-13 00:45:13.302276 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-13 00:45:13.302287 | orchestrator | Monday 13 April 2026 00:45:10 +0000 (0:00:00.163) 0:01:13.732 ********** 2026-04-13 00:45:13.302299 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 
'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:13.302310 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:13.302321 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:13.302333 | orchestrator | 2026-04-13 00:45:13.302344 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-13 00:45:13.302355 | orchestrator | Monday 13 April 2026 00:45:10 +0000 (0:00:00.154) 0:01:13.886 ********** 2026-04-13 00:45:13.302367 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:13.302378 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:13.302389 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:13.302401 | orchestrator | 2026-04-13 00:45:13.302412 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-13 00:45:13.302423 | orchestrator | Monday 13 April 2026 00:45:11 +0000 (0:00:00.448) 0:01:14.335 ********** 2026-04-13 00:45:13.302434 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:13.302446 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:13.302457 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:13.302528 | orchestrator | 2026-04-13 00:45:13.302543 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-13 
00:45:13.302555 | orchestrator | Monday 13 April 2026 00:45:11 +0000 (0:00:00.157) 0:01:14.492 ********** 2026-04-13 00:45:13.302566 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:45:13.302578 | orchestrator | 2026-04-13 00:45:13.302590 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-13 00:45:13.302601 | orchestrator | Monday 13 April 2026 00:45:11 +0000 (0:00:00.492) 0:01:14.985 ********** 2026-04-13 00:45:13.302612 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:45:13.302624 | orchestrator | 2026-04-13 00:45:13.302635 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-13 00:45:13.302646 | orchestrator | Monday 13 April 2026 00:45:12 +0000 (0:00:00.519) 0:01:15.504 ********** 2026-04-13 00:45:13.302658 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:45:13.302669 | orchestrator | 2026-04-13 00:45:13.302680 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-13 00:45:13.302692 | orchestrator | Monday 13 April 2026 00:45:12 +0000 (0:00:00.150) 0:01:15.655 ********** 2026-04-13 00:45:13.302704 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'vg_name': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'}) 2026-04-13 00:45:13.302716 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'vg_name': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'}) 2026-04-13 00:45:13.302728 | orchestrator | 2026-04-13 00:45:13.302739 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-13 00:45:13.302750 | orchestrator | Monday 13 April 2026 00:45:12 +0000 (0:00:00.166) 0:01:15.822 ********** 2026-04-13 00:45:13.302808 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 
'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:13.302821 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:13.302833 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:13.302844 | orchestrator | 2026-04-13 00:45:13.302855 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-13 00:45:13.302867 | orchestrator | Monday 13 April 2026 00:45:12 +0000 (0:00:00.169) 0:01:15.991 ********** 2026-04-13 00:45:13.302878 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:13.302890 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:13.302902 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:13.302922 | orchestrator | 2026-04-13 00:45:13.302943 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-13 00:45:13.302962 | orchestrator | Monday 13 April 2026 00:45:13 +0000 (0:00:00.167) 0:01:16.159 ********** 2026-04-13 00:45:13.302982 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})  2026-04-13 00:45:13.303001 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})  2026-04-13 00:45:13.303023 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:13.303043 | orchestrator | 2026-04-13 00:45:13.303064 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-13 
00:45:13.303086 | orchestrator | Monday 13 April 2026 00:45:13 +0000 (0:00:00.146) 0:01:16.305 **********
2026-04-13 00:45:13.303106 | orchestrator | ok: [testbed-node-5] => {
2026-04-13 00:45:13.303123 | orchestrator |  "lvm_report": {
2026-04-13 00:45:13.303135 | orchestrator |  "lv": [
2026-04-13 00:45:13.303160 | orchestrator |  {
2026-04-13 00:45:13.303189 | orchestrator |  "lv_name": "osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d",
2026-04-13 00:45:13.303211 | orchestrator |  "vg_name": "ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d"
2026-04-13 00:45:13.303229 | orchestrator |  },
2026-04-13 00:45:13.303246 | orchestrator |  {
2026-04-13 00:45:13.303263 | orchestrator |  "lv_name": "osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4",
2026-04-13 00:45:13.303281 | orchestrator |  "vg_name": "ceph-ae95053f-cfae-50f3-8301-23c2132e6da4"
2026-04-13 00:45:13.303299 | orchestrator |  }
2026-04-13 00:45:13.303317 | orchestrator |  ],
2026-04-13 00:45:13.303335 | orchestrator |  "pv": [
2026-04-13 00:45:13.303353 | orchestrator |  {
2026-04-13 00:45:13.303372 | orchestrator |  "pv_name": "/dev/sdb",
2026-04-13 00:45:13.303390 | orchestrator |  "vg_name": "ceph-ae95053f-cfae-50f3-8301-23c2132e6da4"
2026-04-13 00:45:13.303410 | orchestrator |  },
2026-04-13 00:45:13.303429 | orchestrator |  {
2026-04-13 00:45:13.303447 | orchestrator |  "pv_name": "/dev/sdc",
2026-04-13 00:45:13.303459 | orchestrator |  "vg_name": "ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d"
2026-04-13 00:45:13.303514 | orchestrator |  }
2026-04-13 00:45:13.303526 | orchestrator |  ]
2026-04-13 00:45:13.303538 | orchestrator |  }
2026-04-13 00:45:13.303550 | orchestrator | }
2026-04-13 00:45:13.303561 | orchestrator |
2026-04-13 00:45:13.303573 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:45:13.303584 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-13 00:45:13.303596 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-13 00:45:13.303608 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-13 00:45:13.303620 | orchestrator | 2026-04-13 00:45:13.303631 | orchestrator | 2026-04-13 00:45:13.303642 | orchestrator | 2026-04-13 00:45:13.303667 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:45:13.303679 | orchestrator | Monday 13 April 2026 00:45:13 +0000 (0:00:00.128) 0:01:16.434 ********** 2026-04-13 00:45:13.303690 | orchestrator | =============================================================================== 2026-04-13 00:45:13.303702 | orchestrator | Create block VGs -------------------------------------------------------- 5.83s 2026-04-13 00:45:13.303713 | orchestrator | Create block LVs -------------------------------------------------------- 4.22s 2026-04-13 00:45:13.303724 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.00s 2026-04-13 00:45:13.303736 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.63s 2026-04-13 00:45:13.303747 | orchestrator | Add known partitions to the list of available block devices ------------- 1.56s 2026-04-13 00:45:13.303758 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.55s 2026-04-13 00:45:13.303769 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.55s 2026-04-13 00:45:13.303781 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.52s 2026-04-13 00:45:13.303803 | orchestrator | Add known links to the list of available block devices ------------------ 1.30s 2026-04-13 00:45:13.738455 | orchestrator | Add known partitions to the list of available block devices ------------- 1.18s 2026-04-13 
00:45:13.738555 | orchestrator | Print LVM report data --------------------------------------------------- 1.06s 2026-04-13 00:45:13.738561 | orchestrator | Add known links to the list of available block devices ------------------ 0.91s 2026-04-13 00:45:13.738567 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s 2026-04-13 00:45:13.738571 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2026-04-13 00:45:13.738593 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s 2026-04-13 00:45:13.738598 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.83s 2026-04-13 00:45:13.738613 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.78s 2026-04-13 00:45:13.738618 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.77s 2026-04-13 00:45:13.738622 | orchestrator | Get initial list of available block devices ----------------------------- 0.75s 2026-04-13 00:45:13.738626 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.75s 2026-04-13 00:45:25.252864 | orchestrator | 2026-04-13 00:45:25 | INFO  | Prepare task for execution of facts. 2026-04-13 00:45:25.330612 | orchestrator | 2026-04-13 00:45:25 | INFO  | Task 309045ad-588a-4645-9ffa-0837726a586b (facts) was prepared for execution. 2026-04-13 00:45:25.330708 | orchestrator | 2026-04-13 00:45:25 | INFO  | It takes a moment until task 309045ad-588a-4645-9ffa-0837726a586b (facts) has been started and output is visible here. 
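Aside: the "Get list of Ceph LVs/PVs with associated VGs" and "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" tasks above boil down to parsing the JSON report format of lvm2's `lvs`/`pvs` and merging the two sections into the `lvm_report` structure printed by "Print LVM report data". A minimal sketch of that combination step, using the values from this log (the variable names `lvs_out`/`pvs_out` and the helper `combine_reports` are illustrative, not the role's actual internals):

```python
import json

# Shape of `lvs -o lv_name,vg_name --reportformat json` (lvm2),
# populated with the LV/PV pairs visible in the log above.
lvs_out = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d",
     "vg_name": "ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d"},
    {"lv_name": "osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4",
     "vg_name": "ceph-ae95053f-cfae-50f3-8301-23c2132e6da4"},
]}]})
pvs_out = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb", "vg_name": "ceph-ae95053f-cfae-50f3-8301-23c2132e6da4"},
    {"pv_name": "/dev/sdc", "vg_name": "ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d"},
]}]})

def combine_reports(lvs_json: str, pvs_json: str) -> dict:
    """Merge the lv and pv report sections into one lvm_report dict."""
    return {
        "lv": json.loads(lvs_json)["report"][0]["lv"],
        "pv": json.loads(pvs_json)["report"][0]["pv"],
    }

lvm_report = combine_reports(lvs_out, pvs_out)
# "Create list of VG/LV names": the pairs later checked against lvm_volumes
vg_lv_names = [f"{e['vg_name']}/{e['lv_name']}" for e in lvm_report["lv"]]
```

The "Fail if ... LV defined in lvm_volumes is missing" tasks then only need to test membership of each expected `data_vg`/`data` pair in a list like `vg_lv_names`.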
2026-04-13 00:45:37.049717 | orchestrator | 2026-04-13 00:45:37.049797 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-13 00:45:37.049810 | orchestrator | 2026-04-13 00:45:37.049820 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-13 00:45:37.049829 | orchestrator | Monday 13 April 2026 00:45:28 +0000 (0:00:00.348) 0:00:00.348 ********** 2026-04-13 00:45:37.049838 | orchestrator | ok: [testbed-manager] 2026-04-13 00:45:37.049848 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:45:37.049856 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:45:37.049865 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:45:37.049873 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:45:37.049882 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:45:37.049890 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:45:37.049899 | orchestrator | 2026-04-13 00:45:37.049908 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-13 00:45:37.049916 | orchestrator | Monday 13 April 2026 00:45:30 +0000 (0:00:01.341) 0:00:01.690 ********** 2026-04-13 00:45:37.049925 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:45:37.049934 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:45:37.049943 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:45:37.049952 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:45:37.049960 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:45:37.049969 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:45:37.049977 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:37.049986 | orchestrator | 2026-04-13 00:45:37.049994 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-13 00:45:37.050003 | orchestrator | 2026-04-13 00:45:37.050012 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-13 00:45:37.050065 | orchestrator | Monday 13 April 2026 00:45:31 +0000 (0:00:01.210) 0:00:02.901 ********** 2026-04-13 00:45:37.050074 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:45:37.050082 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:45:37.050090 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:45:37.050099 | orchestrator | ok: [testbed-manager] 2026-04-13 00:45:37.050107 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:45:37.050115 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:45:37.050124 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:45:37.050132 | orchestrator | 2026-04-13 00:45:37.050140 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-13 00:45:37.050149 | orchestrator | 2026-04-13 00:45:37.050157 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-13 00:45:37.050166 | orchestrator | Monday 13 April 2026 00:45:36 +0000 (0:00:04.899) 0:00:07.800 ********** 2026-04-13 00:45:37.050174 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:45:37.050182 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:45:37.050206 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:45:37.050215 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:45:37.050223 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:45:37.050232 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:45:37.050240 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:37.050248 | orchestrator | 2026-04-13 00:45:37.050256 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:45:37.050265 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:45:37.050274 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-13 00:45:37.050282 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:45:37.050292 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:45:37.050301 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:45:37.050311 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:45:37.050321 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:45:37.050330 | orchestrator | 2026-04-13 00:45:37.050340 | orchestrator | 2026-04-13 00:45:37.050349 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:45:37.050359 | orchestrator | Monday 13 April 2026 00:45:36 +0000 (0:00:00.560) 0:00:08.361 ********** 2026-04-13 00:45:37.050368 | orchestrator | =============================================================================== 2026-04-13 00:45:37.050378 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.90s 2026-04-13 00:45:37.050387 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.34s 2026-04-13 00:45:37.050408 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s 2026-04-13 00:45:37.050418 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2026-04-13 00:45:48.419895 | orchestrator | 2026-04-13 00:45:48 | INFO  | Prepare task for execution of frr. 2026-04-13 00:45:48.510238 | orchestrator | 2026-04-13 00:45:48 | INFO  | Task 4d2540f3-0c22-409c-8341-5dc43914442a (frr) was prepared for execution. 
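Aside: the `osism.commons.facts` tasks above ("Create custom facts directory" / "Copy fact files") rely on Ansible's local-facts mechanism: JSON files with a `.fact` suffix placed under `/etc/ansible/facts.d` are picked up during fact gathering and exposed as `ansible_local.<name>`. A small illustration of the file format (written to a temp directory rather than `/etc/ansible/facts.d`; the fact name `testbed` and its keys are hypothetical):

```python
import json
import pathlib
import tempfile

# Stand-in for /etc/ansible/facts.d on a managed node.
facts_d = pathlib.Path(tempfile.mkdtemp())
fact_file = facts_d / "testbed.fact"  # hypothetical fact name

# A static .fact file is plain JSON; Ansible would surface this
# as ansible_local.testbed after the next fact-gathering run.
fact_file.write_text(json.dumps({"role": "compute", "deployed_by": "osism"}))

loaded = json.loads(fact_file.read_text())
```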
2026-04-13 00:45:48.510338 | orchestrator | 2026-04-13 00:45:48 | INFO  | It takes a moment until task 4d2540f3-0c22-409c-8341-5dc43914442a (frr) has been started and output is visible here. 2026-04-13 00:46:12.111311 | orchestrator | 2026-04-13 00:46:12.111387 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-13 00:46:12.111402 | orchestrator | 2026-04-13 00:46:12.111412 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-13 00:46:12.111454 | orchestrator | Monday 13 April 2026 00:45:51 +0000 (0:00:00.282) 0:00:00.282 ********** 2026-04-13 00:46:12.111466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-13 00:46:12.111475 | orchestrator | 2026-04-13 00:46:12.111484 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-13 00:46:12.111492 | orchestrator | Monday 13 April 2026 00:45:51 +0000 (0:00:00.221) 0:00:00.504 ********** 2026-04-13 00:46:12.111501 | orchestrator | changed: [testbed-manager] 2026-04-13 00:46:12.111510 | orchestrator | 2026-04-13 00:46:12.111519 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-13 00:46:12.111547 | orchestrator | Monday 13 April 2026 00:45:53 +0000 (0:00:01.390) 0:00:01.894 ********** 2026-04-13 00:46:12.111556 | orchestrator | changed: [testbed-manager] 2026-04-13 00:46:12.111565 | orchestrator | 2026-04-13 00:46:12.111574 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-13 00:46:12.111583 | orchestrator | Monday 13 April 2026 00:46:02 +0000 (0:00:09.068) 0:00:10.962 ********** 2026-04-13 00:46:12.111592 | orchestrator | ok: [testbed-manager] 2026-04-13 00:46:12.111601 | orchestrator | 2026-04-13 00:46:12.111611 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-13 00:46:12.111620 | orchestrator | Monday 13 April 2026 00:46:03 +0000 (0:00:00.955) 0:00:11.917 ********** 2026-04-13 00:46:12.111628 | orchestrator | changed: [testbed-manager] 2026-04-13 00:46:12.111636 | orchestrator | 2026-04-13 00:46:12.111645 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-13 00:46:12.111653 | orchestrator | Monday 13 April 2026 00:46:04 +0000 (0:00:00.975) 0:00:12.892 ********** 2026-04-13 00:46:12.111662 | orchestrator | ok: [testbed-manager] 2026-04-13 00:46:12.111670 | orchestrator | 2026-04-13 00:46:12.111679 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-13 00:46:12.111687 | orchestrator | Monday 13 April 2026 00:46:05 +0000 (0:00:01.310) 0:00:14.203 ********** 2026-04-13 00:46:12.111695 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:46:12.111703 | orchestrator | 2026-04-13 00:46:12.111712 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-13 00:46:12.111720 | orchestrator | Monday 13 April 2026 00:46:05 +0000 (0:00:00.156) 0:00:14.360 ********** 2026-04-13 00:46:12.111728 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:46:12.111737 | orchestrator | 2026-04-13 00:46:12.111745 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-13 00:46:12.111753 | orchestrator | Monday 13 April 2026 00:46:05 +0000 (0:00:00.237) 0:00:14.598 ********** 2026-04-13 00:46:12.111761 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:46:12.111769 | orchestrator | 2026-04-13 00:46:12.111778 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-13 00:46:12.111787 | orchestrator | Monday 13 April 2026 00:46:06 +0000 (0:00:00.143) 0:00:14.741 ********** 2026-04-13 
00:46:12.111795 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:46:12.111803 | orchestrator | 2026-04-13 00:46:12.111811 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-04-13 00:46:12.111819 | orchestrator | Monday 13 April 2026 00:46:06 +0000 (0:00:00.132) 0:00:14.874 ********** 2026-04-13 00:46:12.111827 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:46:12.111836 | orchestrator | 2026-04-13 00:46:12.111844 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-13 00:46:12.111852 | orchestrator | Monday 13 April 2026 00:46:06 +0000 (0:00:00.153) 0:00:15.027 ********** 2026-04-13 00:46:12.111861 | orchestrator | changed: [testbed-manager] 2026-04-13 00:46:12.111869 | orchestrator | 2026-04-13 00:46:12.111878 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-13 00:46:12.111886 | orchestrator | Monday 13 April 2026 00:46:07 +0000 (0:00:00.893) 0:00:15.921 ********** 2026-04-13 00:46:12.111894 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-13 00:46:12.111903 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-13 00:46:12.111912 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-13 00:46:12.111920 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-13 00:46:12.111929 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-13 00:46:12.111937 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-13 00:46:12.111954 | orchestrator | 2026-04-13 00:46:12.111962 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-04-13 00:46:12.111980 | orchestrator | Monday 13 April 2026 00:46:09 +0000 (0:00:02.125) 0:00:18.046 ********** 2026-04-13 00:46:12.111989 | orchestrator | ok: [testbed-manager] 2026-04-13 00:46:12.111997 | orchestrator | 2026-04-13 00:46:12.112005 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-04-13 00:46:12.112014 | orchestrator | Monday 13 April 2026 00:46:10 +0000 (0:00:01.146) 0:00:19.193 ********** 2026-04-13 00:46:12.112022 | orchestrator | changed: [testbed-manager] 2026-04-13 00:46:12.112031 | orchestrator | 2026-04-13 00:46:12.112039 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:46:12.112048 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-13 00:46:12.112057 | orchestrator | 2026-04-13 00:46:12.112066 | orchestrator | 2026-04-13 00:46:12.112089 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:46:12.112098 | orchestrator | Monday 13 April 2026 00:46:11 +0000 (0:00:01.306) 0:00:20.499 ********** 2026-04-13 00:46:12.112106 | orchestrator | =============================================================================== 2026-04-13 00:46:12.112114 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.07s 2026-04-13 00:46:12.112123 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.13s 2026-04-13 00:46:12.112130 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.39s 2026-04-13 00:46:12.112139 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.31s 2026-04-13 00:46:12.112147 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.31s 
2026-04-13 00:46:12.112156 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.15s 2026-04-13 00:46:12.112164 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.98s 2026-04-13 00:46:12.112172 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.96s 2026-04-13 00:46:12.112180 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.89s 2026-04-13 00:46:12.112189 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.24s 2026-04-13 00:46:12.112197 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2026-04-13 00:46:12.112205 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.16s 2026-04-13 00:46:12.112214 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-04-13 00:46:12.112222 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.14s 2026-04-13 00:46:12.112230 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-04-13 00:46:12.241293 | orchestrator | 2026-04-13 00:46:12.242675 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Apr 13 00:46:12 UTC 2026 2026-04-13 00:46:12.242705 | orchestrator | 2026-04-13 00:46:13.340558 | orchestrator | 2026-04-13 00:46:13 | INFO  | Collection nutshell is prepared for execution 2026-04-13 00:46:13.475775 | orchestrator | 2026-04-13 00:46:13 | INFO  | A [0] - dotfiles 2026-04-13 00:46:23.577002 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [0] - homer 2026-04-13 00:46:23.577118 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [0] - netdata 2026-04-13 00:46:23.577142 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [0] - openstackclient 2026-04-13 00:46:23.577160 | orchestrator | 2026-04-13 
00:46:23 | INFO  | A [0] - phpmyadmin 2026-04-13 00:46:23.577178 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [0] - common 2026-04-13 00:46:23.581694 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [1] -- loadbalancer 2026-04-13 00:46:23.581757 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [2] --- opensearch 2026-04-13 00:46:23.582115 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [2] --- mariadb-ng 2026-04-13 00:46:23.582266 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [3] ---- horizon 2026-04-13 00:46:23.582777 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [3] ---- keystone 2026-04-13 00:46:23.583166 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [4] ----- neutron 2026-04-13 00:46:23.583671 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [5] ------ wait-for-nova 2026-04-13 00:46:23.583911 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [6] ------- octavia 2026-04-13 00:46:23.585642 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [4] ----- barbican 2026-04-13 00:46:23.587045 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [4] ----- designate 2026-04-13 00:46:23.587098 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [4] ----- ironic 2026-04-13 00:46:23.587112 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [4] ----- placement 2026-04-13 00:46:23.587132 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [4] ----- magnum 2026-04-13 00:46:23.587956 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [1] -- openvswitch 2026-04-13 00:46:23.588205 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [2] --- ovn 2026-04-13 00:46:23.588698 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [1] -- memcached 2026-04-13 00:46:23.589372 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [1] -- redis 2026-04-13 00:46:23.589396 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [1] -- rabbitmq-ng 2026-04-13 00:46:23.589816 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [0] - kubernetes 2026-04-13 00:46:23.592580 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [1] -- 
kubeconfig 2026-04-13 00:46:23.592805 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [1] -- copy-kubeconfig 2026-04-13 00:46:23.592828 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [0] - ceph 2026-04-13 00:46:23.595321 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [1] -- ceph-pools 2026-04-13 00:46:23.595365 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [2] --- copy-ceph-keys 2026-04-13 00:46:23.595380 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [3] ---- cephclient 2026-04-13 00:46:23.595838 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-04-13 00:46:23.595984 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [4] ----- wait-for-keystone 2026-04-13 00:46:23.596021 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [5] ------ kolla-ceph-rgw 2026-04-13 00:46:23.596291 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [5] ------ glance 2026-04-13 00:46:23.596322 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [5] ------ cinder 2026-04-13 00:46:23.596900 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [5] ------ nova 2026-04-13 00:46:23.596923 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [4] ----- prometheus 2026-04-13 00:46:23.597278 | orchestrator | 2026-04-13 00:46:23 | INFO  | A [5] ------ grafana 2026-04-13 00:46:23.810923 | orchestrator | 2026-04-13 00:46:23 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-04-13 00:46:23.812819 | orchestrator | 2026-04-13 00:46:23 | INFO  | Tasks are running in the background 2026-04-13 00:46:25.382987 | orchestrator | 2026-04-13 00:46:25 | INFO  | No task IDs specified, wait for all currently running tasks 2026-04-13 00:46:27.591572 | orchestrator | 2026-04-13 00:46:27 | INFO  | Task ff1ac344-1ade-423e-8d43-18312d543064 is in state STARTED 2026-04-13 00:46:27.595294 | orchestrator | 2026-04-13 00:46:27 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:46:27.595372 | orchestrator | 2026-04-13 00:46:27 | INFO 
 | Task b60af66e-f38d-41b4-a124-38a026f3bcae is in state STARTED
2026-04-13 00:46:27.595651 | orchestrator | 2026-04-13 00:46:27 | INFO  | Task 7e454d36-1bb6-482a-a08d-6cb5284377c5 is in state STARTED
2026-04-13 00:46:27.598799 | orchestrator | 2026-04-13 00:46:27 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:46:27.598970 | orchestrator | 2026-04-13 00:46:27 | INFO  | Task 163f1165-f5bd-48a4-9d41-c4ccea7d1870 is in state STARTED
2026-04-13 00:46:27.599765 | orchestrator | 2026-04-13 00:46:27 | INFO  | Task 11e6a288-971e-4715-9215-b07798a8aca3 is in state STARTED
2026-04-13 00:46:27.600510 | orchestrator | 2026-04-13 00:46:27 | INFO  | Wait 1 second(s) until the next check
[Identical polling cycles repeated every ~3 s from 00:46:30 through 00:46:49; tasks ff1ac344, c28ca6f6, b60af66e, 7e454d36, 34dc3400, 163f1165 and 11e6a288 all remained in state STARTED.]
2026-04-13 00:46:52.459081 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-04-13 00:46:52.459111 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-04-13 00:46:52.459123 | orchestrator | Monday 13 April 2026 00:46:34 +0000 (0:00:00.977) 0:00:00.977 **********
2026-04-13 00:46:52.459135 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:46:52.459147 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:46:52.459158 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:46:52.459169 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:46:52.459180 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:46:52.459191 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:46:52.459202 | orchestrator | changed: [testbed-manager]
2026-04-13 00:46:52.459241 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-04-13 00:46:52.459268 | orchestrator | Monday 13 April 2026 00:46:39 +0000 (0:00:05.376) 0:00:06.354 **********
2026-04-13 00:46:52.459287 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-13 00:46:52.459306 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-13 00:46:52.459325 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-13 00:46:52.459345 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-13 00:46:52.459365 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-13 00:46:52.459384 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-13 00:46:52.459509 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-13 00:46:52.459535 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-04-13 00:46:52.459549 | orchestrator | Monday 13 April 2026 00:46:41 +0000 (0:00:01.649) 0:00:08.003 **********
2026-04-13 00:46:52.459566 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
[All seven hosts returned an equivalent "ok" result for this task: the probe command `ls -F ~/.tmux.conf` exited with rc=2 ("ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"), failed_when_result was False, so there was no existing file to remove before linking.]
2026-04-13 00:46:52.459746 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-04-13 00:46:52.459757 | orchestrator | Monday 13 April 2026 00:46:44 +0000 (0:00:02.974) 0:00:10.977 **********
2026-04-13 00:46:52.459769 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-13 00:46:52.459780 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-13 00:46:52.459791 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-13 00:46:52.459802 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-13 00:46:52.459814 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-13 00:46:52.459833 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-13 00:46:52.459859 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-13 00:46:52.459902 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-04-13 00:46:52.459920 | orchestrator | Monday 13 April 2026 00:46:46 +0000 (0:00:02.173) 0:00:13.151 **********
2026-04-13 00:46:52.459938 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-04-13 00:46:52.459955 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-04-13 00:46:52.459972 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-04-13 00:46:52.459990 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-04-13 00:46:52.460008 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-04-13 00:46:52.460026 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-04-13 00:46:52.460045 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-04-13 00:46:52.460082 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:46:52.460114 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:46:52.460136 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:46:52.460155 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:46:52.460175 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:46:52.460194 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:46:52.460214 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:46:52.460234 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:46:52.460293 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:46:52.460312 | orchestrator | Monday 13 April 2026 00:46:49 +0000 (0:00:03.136) 0:00:16.287 **********
2026-04-13 00:46:52.460330 | orchestrator | ===============================================================================
2026-04-13 00:46:52.460342 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 5.38s
2026-04-13 00:46:52.460364 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.14s
2026-04-13 00:46:52.460376 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.97s
2026-04-13 00:46:52.460387 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.17s
2026-04-13 00:46:52.460425 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.65s
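The geerlingguy.dotfiles play above runs four idempotent steps on each host: clone the dotfiles repository, probe whether the target file already exists, ensure parent folders, and symlink each configured dotfile (here `.tmux.conf`) into the home directory. As a rough illustration only (the function name, repository URL and paths are placeholders, not values from this job, and the real role is Ansible, not Python), the same flow can be sketched as:

```python
import os
import subprocess

def install_dotfiles(repo_url, clone_dir, home, files):
    """Clone a dotfiles repo and symlink selected files into a home folder.

    Mirrors the task order seen in the play: clone, check the existing
    target, remove a conflicting regular file, ensure parent folders,
    then link. Safe to re-run (idempotent), like the Ansible role.
    """
    # Clone only once; a .git directory marks an existing checkout.
    if not os.path.isdir(os.path.join(clone_dir, ".git")):
        subprocess.run(["git", "clone", repo_url, clone_dir], check=True)

    for name in files:
        src = os.path.join(clone_dir, name)
        dst = os.path.join(home, name)
        # Remove an existing regular file so it can be replaced by a link
        # (the role's `ls -F` probe returning rc=2 means nothing to remove).
        if os.path.exists(dst) and not os.path.islink(dst):
            os.remove(dst)
        # Ensure parent folders of the link exist.
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        # Link the dotfile into the home folder.
        if not os.path.islink(dst):
            os.symlink(src, dst)
```

On the first run every link task reports `changed` (as in the recap above, `changed=2` per host); a second run would find the symlinks in place and change nothing.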
2026-04-13 00:46:52.460437 | orchestrator | 2026-04-13 00:46:52 | INFO  | Task ff1ac344-1ade-423e-8d43-18312d543064 is in state STARTED
2026-04-13 00:46:52.460449 | orchestrator | 2026-04-13 00:46:52 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:46:52.460460 | orchestrator | 2026-04-13 00:46:52 | INFO  | Task b60af66e-f38d-41b4-a124-38a026f3bcae is in state STARTED
2026-04-13 00:46:52.460472 | orchestrator | 2026-04-13 00:46:52 | INFO  | Task 87e14b99-69e1-4bbe-bfa1-1f1158bbe057 is in state STARTED
2026-04-13 00:46:52.460483 | orchestrator | 2026-04-13 00:46:52 | INFO  | Task 7e454d36-1bb6-482a-a08d-6cb5284377c5 is in state SUCCESS
2026-04-13 00:46:52.460494 | orchestrator | 2026-04-13 00:46:52 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:46:52.460505 | orchestrator | 2026-04-13 00:46:52 | INFO  | Task 163f1165-f5bd-48a4-9d41-c4ccea7d1870 is in state STARTED
2026-04-13 00:46:52.460516 | orchestrator | 2026-04-13 00:46:52 | INFO  | Task 11e6a288-971e-4715-9215-b07798a8aca3 is in state STARTED
2026-04-13 00:46:52.460528 | orchestrator | 2026-04-13 00:46:52 | INFO  | Wait 1 second(s) until the next check
[Polling continued every ~3 s: task 163f1165-f5bd-48a4-9d41-c4ccea7d1870 reached SUCCESS at 00:47:17 and task b60af66e-f38d-41b4-a124-38a026f3bcae reached SUCCESS at 00:47:30; tasks ff1ac344, c28ca6f6, 87e14b99, 34dc3400 and 11e6a288 were still in state STARTED at 00:47:54.]
2026-04-13 00:47:57.619873 | orchestrator | 2026-04-13 00:47:57 | INFO  | Task
ff1ac344-1ade-423e-8d43-18312d543064 is in state STARTED 2026-04-13 00:47:57.623436 | orchestrator | 2026-04-13 00:47:57 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:47:57.625224 | orchestrator | 2026-04-13 00:47:57 | INFO  | Task 87e14b99-69e1-4bbe-bfa1-1f1158bbe057 is in state STARTED 2026-04-13 00:47:57.627293 | orchestrator | 2026-04-13 00:47:57 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:47:57.629330 | orchestrator | 2026-04-13 00:47:57 | INFO  | Task 11e6a288-971e-4715-9215-b07798a8aca3 is in state STARTED 2026-04-13 00:47:57.629375 | orchestrator | 2026-04-13 00:47:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:00.689198 | orchestrator | 2026-04-13 00:48:00 | INFO  | Task ff1ac344-1ade-423e-8d43-18312d543064 is in state STARTED 2026-04-13 00:48:00.695400 | orchestrator | 2026-04-13 00:48:00 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:48:00.696888 | orchestrator | 2026-04-13 00:48:00.696958 | orchestrator | 2026-04-13 00:48:00.696981 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-04-13 00:48:00.697002 | orchestrator | 2026-04-13 00:48:00.697022 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-04-13 00:48:00.697042 | orchestrator | Monday 13 April 2026 00:46:34 +0000 (0:00:00.571) 0:00:00.572 ********** 2026-04-13 00:48:00.697061 | orchestrator | ok: [testbed-manager] => { 2026-04-13 00:48:00.697084 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-04-13 00:48:00.697104 | orchestrator | } 2026-04-13 00:48:00.697124 | orchestrator | 2026-04-13 00:48:00.697143 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-04-13 00:48:00.697164 | orchestrator | Monday 13 April 2026 00:46:34 +0000 (0:00:00.130) 0:00:00.702 ********** 2026-04-13 00:48:00.697183 | orchestrator | ok: [testbed-manager] 2026-04-13 00:48:00.697204 | orchestrator | 2026-04-13 00:48:00.697225 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-04-13 00:48:00.697284 | orchestrator | Monday 13 April 2026 00:46:36 +0000 (0:00:02.318) 0:00:03.021 ********** 2026-04-13 00:48:00.697306 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-04-13 00:48:00.697325 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-04-13 00:48:00.697344 | orchestrator | 2026-04-13 00:48:00.697394 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-04-13 00:48:00.697414 | orchestrator | Monday 13 April 2026 00:46:39 +0000 (0:00:02.246) 0:00:05.267 ********** 2026-04-13 00:48:00.697433 | orchestrator | changed: [testbed-manager] 2026-04-13 00:48:00.697451 | orchestrator | 2026-04-13 00:48:00.697489 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-04-13 00:48:00.697510 | orchestrator | Monday 13 April 2026 00:46:43 +0000 (0:00:04.670) 0:00:09.937 ********** 2026-04-13 00:48:00.697529 | orchestrator | changed: [testbed-manager] 2026-04-13 00:48:00.697551 | orchestrator | 2026-04-13 00:48:00.697571 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-04-13 00:48:00.697593 | orchestrator | Monday 13 April 2026 00:46:44 +0000 (0:00:01.296) 0:00:11.234 ********** 2026-04-13 00:48:00.697614 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2026-04-13 00:48:00.697635 | orchestrator | ok: [testbed-manager] 2026-04-13 00:48:00.697660 | orchestrator | 2026-04-13 00:48:00.697678 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-04-13 00:48:00.697695 | orchestrator | Monday 13 April 2026 00:47:11 +0000 (0:00:26.747) 0:00:37.982 ********** 2026-04-13 00:48:00.697714 | orchestrator | changed: [testbed-manager] 2026-04-13 00:48:00.697806 | orchestrator | 2026-04-13 00:48:00.697832 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:48:00.697851 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:48:00.697873 | orchestrator | 2026-04-13 00:48:00.697892 | orchestrator | 2026-04-13 00:48:00.697912 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:48:00.697933 | orchestrator | Monday 13 April 2026 00:47:15 +0000 (0:00:04.205) 0:00:42.188 ********** 2026-04-13 00:48:00.697953 | orchestrator | =============================================================================== 2026-04-13 00:48:00.697971 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.75s 2026-04-13 00:48:00.697989 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.67s 2026-04-13 00:48:00.698009 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.21s 2026-04-13 00:48:00.698125 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.32s 2026-04-13 00:48:00.698147 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.25s 2026-04-13 00:48:00.698168 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.30s 2026-04-13 00:48:00.698189 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.13s 2026-04-13 00:48:00.698210 | orchestrator | 2026-04-13 00:48:00.698231 | orchestrator | 2026-04-13 00:48:00.698252 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-13 00:48:00.698274 | orchestrator | 2026-04-13 00:48:00.698296 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-13 00:48:00.698317 | orchestrator | Monday 13 April 2026 00:46:33 +0000 (0:00:00.358) 0:00:00.358 ********** 2026-04-13 00:48:00.698339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-13 00:48:00.698407 | orchestrator | 2026-04-13 00:48:00.698429 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-13 00:48:00.698447 | orchestrator | Monday 13 April 2026 00:46:33 +0000 (0:00:00.234) 0:00:00.592 ********** 2026-04-13 00:48:00.698467 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-13 00:48:00.698686 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-13 00:48:00.698714 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-13 00:48:00.698734 | orchestrator | 2026-04-13 00:48:00.698753 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-13 00:48:00.698772 | orchestrator | Monday 13 April 2026 00:46:35 +0000 (0:00:02.012) 0:00:02.605 ********** 2026-04-13 00:48:00.698791 | orchestrator | changed: [testbed-manager] 2026-04-13 00:48:00.698811 | orchestrator | 2026-04-13 00:48:00.698831 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-13 00:48:00.698850 | orchestrator | Monday 13 April 2026 00:46:38 +0000 (0:00:02.574) 
0:00:05.179 ********** 2026-04-13 00:48:00.698893 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-04-13 00:48:00.698914 | orchestrator | ok: [testbed-manager] 2026-04-13 00:48:00.698934 | orchestrator | 2026-04-13 00:48:00.698954 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-13 00:48:00.698972 | orchestrator | Monday 13 April 2026 00:47:11 +0000 (0:00:33.709) 0:00:38.888 ********** 2026-04-13 00:48:00.698991 | orchestrator | changed: [testbed-manager] 2026-04-13 00:48:00.699009 | orchestrator | 2026-04-13 00:48:00.699027 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-13 00:48:00.699047 | orchestrator | Monday 13 April 2026 00:47:16 +0000 (0:00:04.592) 0:00:43.481 ********** 2026-04-13 00:48:00.699062 | orchestrator | ok: [testbed-manager] 2026-04-13 00:48:00.699077 | orchestrator | 2026-04-13 00:48:00.699093 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-13 00:48:00.699109 | orchestrator | Monday 13 April 2026 00:47:17 +0000 (0:00:01.226) 0:00:44.708 ********** 2026-04-13 00:48:00.699125 | orchestrator | changed: [testbed-manager] 2026-04-13 00:48:00.699140 | orchestrator | 2026-04-13 00:48:00.699157 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-13 00:48:00.699175 | orchestrator | Monday 13 April 2026 00:47:20 +0000 (0:00:02.431) 0:00:47.139 ********** 2026-04-13 00:48:00.699191 | orchestrator | changed: [testbed-manager] 2026-04-13 00:48:00.699208 | orchestrator | 2026-04-13 00:48:00.699225 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-13 00:48:00.699242 | orchestrator | Monday 13 April 2026 00:47:22 +0000 (0:00:02.426) 0:00:49.566 ********** 2026-04-13 00:48:00.699259 | orchestrator | changed: 
[testbed-manager] 2026-04-13 00:48:00.699276 | orchestrator | 2026-04-13 00:48:00.699309 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-13 00:48:00.699328 | orchestrator | Monday 13 April 2026 00:47:26 +0000 (0:00:03.744) 0:00:53.311 ********** 2026-04-13 00:48:00.699346 | orchestrator | ok: [testbed-manager] 2026-04-13 00:48:00.699398 | orchestrator | 2026-04-13 00:48:00.699418 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:48:00.699434 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:48:00.699454 | orchestrator | 2026-04-13 00:48:00.699471 | orchestrator | 2026-04-13 00:48:00.699491 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:48:00.699509 | orchestrator | Monday 13 April 2026 00:47:26 +0000 (0:00:00.478) 0:00:53.790 ********** 2026-04-13 00:48:00.699529 | orchestrator | =============================================================================== 2026-04-13 00:48:00.699550 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.71s 2026-04-13 00:48:00.699568 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 4.59s 2026-04-13 00:48:00.699587 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 3.74s 2026-04-13 00:48:00.699607 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.56s 2026-04-13 00:48:00.699626 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.43s 2026-04-13 00:48:00.699663 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.43s 2026-04-13 00:48:00.699683 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.01s 
2026-04-13 00:48:00.699701 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.23s 2026-04-13 00:48:00.699719 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.48s 2026-04-13 00:48:00.699738 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.23s 2026-04-13 00:48:00.699758 | orchestrator | 2026-04-13 00:48:00.699776 | orchestrator | 2026-04-13 00:48:00.699794 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-04-13 00:48:00.699814 | orchestrator | 2026-04-13 00:48:00.699835 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-04-13 00:48:00.699854 | orchestrator | Monday 13 April 2026 00:46:54 +0000 (0:00:00.651) 0:00:00.651 ********** 2026-04-13 00:48:00.699872 | orchestrator | ok: [testbed-manager] 2026-04-13 00:48:00.699890 | orchestrator | 2026-04-13 00:48:00.699910 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-04-13 00:48:00.699928 | orchestrator | Monday 13 April 2026 00:46:56 +0000 (0:00:01.343) 0:00:01.995 ********** 2026-04-13 00:48:00.699947 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-04-13 00:48:00.699966 | orchestrator | 2026-04-13 00:48:00.699985 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-04-13 00:48:00.700005 | orchestrator | Monday 13 April 2026 00:46:57 +0000 (0:00:00.962) 0:00:02.957 ********** 2026-04-13 00:48:00.700023 | orchestrator | changed: [testbed-manager] 2026-04-13 00:48:00.700041 | orchestrator | 2026-04-13 00:48:00.700060 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-04-13 00:48:00.700078 | orchestrator | Monday 13 April 2026 00:46:58 +0000 (0:00:01.536) 0:00:04.494 ********** 2026-04-13 00:48:00.700097 | 
orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-04-13 00:48:00.700117 | orchestrator | ok: [testbed-manager] 2026-04-13 00:48:00.700136 | orchestrator | 2026-04-13 00:48:00.700155 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-04-13 00:48:00.700177 | orchestrator | Monday 13 April 2026 00:47:52 +0000 (0:00:53.350) 0:00:57.844 ********** 2026-04-13 00:48:00.700199 | orchestrator | changed: [testbed-manager] 2026-04-13 00:48:00.700218 | orchestrator | 2026-04-13 00:48:00.700237 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:48:00.700258 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:48:00.700277 | orchestrator | 2026-04-13 00:48:00.700295 | orchestrator | 2026-04-13 00:48:00.700315 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:48:00.700381 | orchestrator | Monday 13 April 2026 00:47:59 +0000 (0:00:07.895) 0:01:05.740 ********** 2026-04-13 00:48:00.700403 | orchestrator | =============================================================================== 2026-04-13 00:48:00.700424 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 53.35s 2026-04-13 00:48:00.700445 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 7.90s 2026-04-13 00:48:00.700465 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.54s 2026-04-13 00:48:00.700484 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.34s 2026-04-13 00:48:00.700502 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.96s 2026-04-13 00:48:00.700521 | orchestrator | 2026-04-13 00:48:00 | INFO  | Task 
87e14b99-69e1-4bbe-bfa1-1f1158bbe057 is in state SUCCESS 2026-04-13 00:48:00.700542 | orchestrator | 2026-04-13 00:48:00 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:48:00.701749 | orchestrator | 2026-04-13 00:48:00 | INFO  | Task 11e6a288-971e-4715-9215-b07798a8aca3 is in state SUCCESS 2026-04-13 00:48:00.701821 | orchestrator | 2026-04-13 00:48:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:00.701831 | orchestrator | 2026-04-13 00:48:00.701839 | orchestrator | 2026-04-13 00:48:00.701847 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 00:48:00.701854 | orchestrator | 2026-04-13 00:48:00.701861 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 00:48:00.701876 | orchestrator | Monday 13 April 2026 00:46:34 +0000 (0:00:00.742) 0:00:00.742 ********** 2026-04-13 00:48:00.701883 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-04-13 00:48:00.701891 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-04-13 00:48:00.701897 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-04-13 00:48:00.701904 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-04-13 00:48:00.701911 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-04-13 00:48:00.701918 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-04-13 00:48:00.701925 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-04-13 00:48:00.701931 | orchestrator | 2026-04-13 00:48:00.701938 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-04-13 00:48:00.701945 | orchestrator | 2026-04-13 00:48:00.701952 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-04-13 
00:48:00.701959 | orchestrator | Monday 13 April 2026 00:46:36 +0000 (0:00:01.967) 0:00:02.710 ********** 2026-04-13 00:48:00.701975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-2, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:48:00.701984 | orchestrator | 2026-04-13 00:48:00.701991 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-04-13 00:48:00.701998 | orchestrator | Monday 13 April 2026 00:46:37 +0000 (0:00:01.484) 0:00:04.194 ********** 2026-04-13 00:48:00.702005 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:48:00.702013 | orchestrator | ok: [testbed-manager] 2026-04-13 00:48:00.702066 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:48:00.702073 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:48:00.702080 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:48:00.702087 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:48:00.702094 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:48:00.702101 | orchestrator | 2026-04-13 00:48:00.702108 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-04-13 00:48:00.702115 | orchestrator | Monday 13 April 2026 00:46:40 +0000 (0:00:02.879) 0:00:07.073 ********** 2026-04-13 00:48:00.702122 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:48:00.702129 | orchestrator | ok: [testbed-manager] 2026-04-13 00:48:00.702136 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:48:00.702143 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:48:00.702150 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:48:00.702156 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:48:00.702163 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:48:00.702170 | orchestrator | 2026-04-13 00:48:00.702177 | orchestrator | TASK [osism.services.netdata : Add 
repository gpg key] ************************* 2026-04-13 00:48:00.702184 | orchestrator | Monday 13 April 2026 00:46:45 +0000 (0:00:04.801) 0:00:11.875 ********** 2026-04-13 00:48:00.702191 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:48:00.702198 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:48:00.702205 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:48:00.702212 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:48:00.702219 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:48:00.702226 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:48:00.702233 | orchestrator | changed: [testbed-manager] 2026-04-13 00:48:00.702240 | orchestrator | 2026-04-13 00:48:00.702252 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-04-13 00:48:00.702259 | orchestrator | Monday 13 April 2026 00:46:47 +0000 (0:00:02.249) 0:00:14.124 ********** 2026-04-13 00:48:00.702266 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:48:00.702273 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:48:00.702280 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:48:00.702286 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:48:00.702293 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:48:00.702300 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:48:00.702307 | orchestrator | changed: [testbed-manager] 2026-04-13 00:48:00.702314 | orchestrator | 2026-04-13 00:48:00.702321 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-04-13 00:48:00.702328 | orchestrator | Monday 13 April 2026 00:46:57 +0000 (0:00:10.302) 0:00:24.427 ********** 2026-04-13 00:48:00.702335 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:48:00.702342 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:48:00.702375 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:48:00.702383 | orchestrator | changed: [testbed-node-4] 
2026-04-13 00:48:00.702391 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:48:00.702399 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:48:00.702407 | orchestrator | changed: [testbed-manager] 2026-04-13 00:48:00.702415 | orchestrator | 2026-04-13 00:48:00.702422 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-04-13 00:48:00.702431 | orchestrator | Monday 13 April 2026 00:47:26 +0000 (0:00:28.176) 0:00:52.603 ********** 2026-04-13 00:48:00.702439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:48:00.702449 | orchestrator | 2026-04-13 00:48:00.702456 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-04-13 00:48:00.702464 | orchestrator | Monday 13 April 2026 00:47:28 +0000 (0:00:02.514) 0:00:55.118 ********** 2026-04-13 00:48:00.702472 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-04-13 00:48:00.702493 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-04-13 00:48:00.702501 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-04-13 00:48:00.702509 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-04-13 00:48:00.702517 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-04-13 00:48:00.702525 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-04-13 00:48:00.702533 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-04-13 00:48:00.702541 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-04-13 00:48:00.702549 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-04-13 00:48:00.702556 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-04-13 
00:48:00.702564 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-04-13 00:48:00.702572 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-04-13 00:48:00.702580 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-04-13 00:48:00.702588 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-04-13 00:48:00.702596 | orchestrator | 2026-04-13 00:48:00.702604 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-04-13 00:48:00.702612 | orchestrator | Monday 13 April 2026 00:47:34 +0000 (0:00:05.776) 0:01:00.895 ********** 2026-04-13 00:48:00.702620 | orchestrator | ok: [testbed-manager] 2026-04-13 00:48:00.702628 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:48:00.702637 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:48:00.702645 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:48:00.702652 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:48:00.702660 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:48:00.702668 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:48:00.702680 | orchestrator | 2026-04-13 00:48:00.702689 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-04-13 00:48:00.702697 | orchestrator | Monday 13 April 2026 00:47:35 +0000 (0:00:01.302) 0:01:02.198 ********** 2026-04-13 00:48:00.702705 | orchestrator | changed: [testbed-manager] 2026-04-13 00:48:00.702713 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:48:00.702721 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:48:00.702729 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:48:00.702737 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:48:00.702745 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:48:00.702752 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:48:00.702759 | orchestrator | 2026-04-13 00:48:00.702766 | orchestrator | TASK 
[osism.services.netdata : Add netdata user to docker group] *************** 2026-04-13 00:48:00.702773 | orchestrator | Monday 13 April 2026 00:47:36 +0000 (0:00:01.144) 0:01:03.342 ********** 2026-04-13 00:48:00.702780 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:48:00.702787 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:48:00.702794 | orchestrator | ok: [testbed-manager] 2026-04-13 00:48:00.702801 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:48:00.702808 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:48:00.702815 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:48:00.702822 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:48:00.702861 | orchestrator | 2026-04-13 00:48:00.702869 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-04-13 00:48:00.702876 | orchestrator | Monday 13 April 2026 00:47:38 +0000 (0:00:01.671) 0:01:05.014 ********** 2026-04-13 00:48:00.702883 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:48:00.702890 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:48:00.702897 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:48:00.702904 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:48:00.702911 | orchestrator | ok: [testbed-manager] 2026-04-13 00:48:00.702918 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:48:00.702924 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:48:00.702931 | orchestrator | 2026-04-13 00:48:00.702939 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-13 00:48:00.702946 | orchestrator | Monday 13 April 2026 00:47:41 +0000 (0:00:02.837) 0:01:07.852 ********** 2026-04-13 00:48:00.702953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-13 00:48:00.702961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml 
for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:48:00.702969 | orchestrator |
2026-04-13 00:48:00.702976 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-04-13 00:48:00.702983 | orchestrator | Monday 13 April 2026 00:47:42 +0000 (0:00:01.560) 0:01:09.412 **********
2026-04-13 00:48:00.702990 | orchestrator | changed: [testbed-manager]
2026-04-13 00:48:00.702997 | orchestrator |
2026-04-13 00:48:00.703004 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-04-13 00:48:00.703011 | orchestrator | Monday 13 April 2026 00:47:45 +0000 (0:00:02.380) 0:01:11.793 **********
2026-04-13 00:48:00.703018 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:48:00.703025 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:48:00.703032 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:48:00.703039 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:48:00.703046 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:48:00.703053 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:48:00.703060 | orchestrator | changed: [testbed-manager]
2026-04-13 00:48:00.703067 | orchestrator |
2026-04-13 00:48:00.703074 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:48:00.703081 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:48:00.703089 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:48:00.703101 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:48:00.703113 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:48:00.703121 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:48:00.703131 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:48:00.703138 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:48:00.703145 | orchestrator |
2026-04-13 00:48:00.703153 | orchestrator |
2026-04-13 00:48:00.703160 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:48:00.703167 | orchestrator | Monday 13 April 2026 00:47:57 +0000 (0:00:12.316) 0:01:24.109 **********
2026-04-13 00:48:00.703174 | orchestrator | ===============================================================================
2026-04-13 00:48:00.703181 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 28.18s
2026-04-13 00:48:00.703189 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 12.32s
2026-04-13 00:48:00.703196 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.30s
2026-04-13 00:48:00.703203 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.78s
2026-04-13 00:48:00.703210 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.80s
2026-04-13 00:48:00.703217 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.88s
2026-04-13 00:48:00.703224 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.84s
2026-04-13 00:48:00.703231 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.51s
2026-04-13 00:48:00.703238 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.38s
2026-04-13 00:48:00.703245 | orchestrator | osism.services.netdata : Add repository gpg key
------------------------- 2.25s
2026-04-13 00:48:00.703252 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.97s
2026-04-13 00:48:00.703259 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.67s
2026-04-13 00:48:00.703266 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.56s
2026-04-13 00:48:00.703273 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.48s
2026-04-13 00:48:00.703280 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.30s
2026-04-13 00:48:00.703287 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.14s
2026-04-13 00:48:03.762806 | orchestrator | 2026-04-13 00:48:03 | INFO  | Task ff1ac344-1ade-423e-8d43-18312d543064 is in state STARTED
2026-04-13 00:48:03.766142 | orchestrator | 2026-04-13 00:48:03 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:48:03.770210 | orchestrator | 2026-04-13 00:48:03 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:48:03.772780 | orchestrator | 2026-04-13 00:48:03 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:49:23.178941 | orchestrator | 2026-04-13 00:49:23 | INFO  | Task ff1ac344-1ade-423e-8d43-18312d543064 is in state SUCCESS
2026-04-13 00:49:23.180599 | orchestrator |
2026-04-13 00:49:23.180672 | orchestrator |
2026-04-13 00:49:23.180688 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-13 00:49:23.180703 | orchestrator |
2026-04-13 00:49:23.180819 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-13 00:49:23.180840 | orchestrator | Monday 13 April 2026 00:46:27 +0000 (0:00:00.369) 0:00:00.369 **********
2026-04-13 00:49:23.180855 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:49:23.180870 | orchestrator |
2026-04-13 00:49:23.180884 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-04-13 00:49:23.180899 | orchestrator | Monday 13 April 2026 00:46:28 +0000 (0:00:01.274) 0:00:01.644 **********
2026-04-13 00:49:23.180913 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-13 00:49:23.180928 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-13 00:49:23.180943 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-13 00:49:23.180957 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-13 00:49:23.180972 | orchestrator | changed:
[testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-13 00:49:23.180989 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-13 00:49:23.181002 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-13 00:49:23.181015 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-13 00:49:23.181029 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-13 00:49:23.181043 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-13 00:49:23.181057 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-13 00:49:23.181072 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-13 00:49:23.181112 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-13 00:49:23.181127 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-13 00:49:23.181142 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-13 00:49:23.181156 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-13 00:49:23.181171 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-13 00:49:23.181186 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-13 00:49:23.181200 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-13 00:49:23.181215 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-13 00:49:23.181230 | orchestrator | changed: 
[testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-13 00:49:23.181244 | orchestrator | 2026-04-13 00:49:23.181259 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-13 00:49:23.181274 | orchestrator | Monday 13 April 2026 00:46:33 +0000 (0:00:04.865) 0:00:06.509 ********** 2026-04-13 00:49:23.181305 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:49:23.181322 | orchestrator | 2026-04-13 00:49:23.181338 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-13 00:49:23.181354 | orchestrator | Monday 13 April 2026 00:46:34 +0000 (0:00:01.521) 0:00:08.030 ********** 2026-04-13 00:49:23.181375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.181394 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.181438 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.181522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.181540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.181569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.181593 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.181609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.181624 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.181670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.181686 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.181698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.181721 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.181737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.181752 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.181765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.181784 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.181805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.181818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.181838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-13 00:49:23.181851 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.181863 | orchestrator | 2026-04-13 00:49:23.181875 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-13 00:49:23.181887 | orchestrator | Monday 13 April 2026 00:46:42 +0000 (0:00:07.090) 0:00:15.121 ********** 2026-04-13 00:49:23.181899 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:49:23.181917 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.181929 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.181941 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:49:23.181953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:49:23.181972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:49:23.181985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.182007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.182989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:49:23.183049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183074 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:49:23.183087 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:49:23.183118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:49:23.183143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183166 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:49:23.183178 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:49:23.183190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  
2026-04-13 00:49:23.183202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183235 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:49:23.183247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:49:23.183268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183295 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:49:23.183302 | orchestrator | 2026-04-13 00:49:23.183310 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-13 00:49:23.183318 | orchestrator | Monday 13 April 2026 00:46:44 +0000 (0:00:02.403) 0:00:17.524 ********** 2026-04-13 00:49:23.183325 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:49:23.183333 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183340 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183347 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:49:23.183358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:49:23.183365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183384 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:49:23.183401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:49:23.183409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183424 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:49:23.183431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:49:23.183438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183479 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:49:23.183491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:49:23.183516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:49:23.183552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183575 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:49:23.183587 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:49:23.183601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:49:23.183618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.183644 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:49:23.183652 | orchestrator | 2026-04-13 00:49:23.183660 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-13 00:49:23.183668 | orchestrator | Monday 13 April 2026 00:46:47 +0000 (0:00:02.858) 0:00:20.383 ********** 2026-04-13 00:49:23.183676 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:49:23.183684 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:49:23.183693 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:49:23.183700 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:49:23.183709 | orchestrator | skipping: 
[testbed-node-3] 2026-04-13 00:49:23.183721 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:49:23.183728 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:49:23.183735 | orchestrator | 2026-04-13 00:49:23.183742 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-13 00:49:23.183749 | orchestrator | Monday 13 April 2026 00:46:48 +0000 (0:00:01.267) 0:00:21.651 ********** 2026-04-13 00:49:23.183756 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:49:23.183763 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:49:23.183770 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:49:23.183777 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:49:23.183784 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:49:23.183791 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:49:23.183797 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:49:23.183804 | orchestrator | 2026-04-13 00:49:23.183811 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-13 00:49:23.183818 | orchestrator | Monday 13 April 2026 00:46:49 +0000 (0:00:01.320) 0:00:22.972 ********** 2026-04-13 00:49:23.183826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.183833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.183841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.183853 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.183872 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.183880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.183893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.183901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.183908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.183916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.183933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.183944 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.183951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.183959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.183971 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.183979 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.183987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.183994 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.184007 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.184017 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.184025 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.184032 | orchestrator | 2026-04-13 00:49:23.184039 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-13 00:49:23.184046 | orchestrator | Monday 13 April 2026 00:46:56 +0000 (0:00:06.409) 0:00:29.381 ********** 2026-04-13 00:49:23.184053 | orchestrator | [WARNING]: Skipped 2026-04-13 00:49:23.184061 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-13 00:49:23.184069 | orchestrator | to this access issue: 2026-04-13 00:49:23.184076 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-13 00:49:23.184083 | orchestrator | directory 2026-04-13 00:49:23.184090 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 00:49:23.184098 | orchestrator | 2026-04-13 00:49:23.184105 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-13 00:49:23.184112 | orchestrator | Monday 13 April 2026 00:46:57 +0000 (0:00:01.228) 0:00:30.610 ********** 2026-04-13 00:49:23.184119 | orchestrator | [WARNING]: 
Skipped 2026-04-13 00:49:23.184126 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-13 00:49:23.184136 | orchestrator | to this access issue: 2026-04-13 00:49:23.184143 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-13 00:49:23.184150 | orchestrator | directory 2026-04-13 00:49:23.184157 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 00:49:23.184164 | orchestrator | 2026-04-13 00:49:23.184171 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-13 00:49:23.184178 | orchestrator | Monday 13 April 2026 00:46:58 +0000 (0:00:01.448) 0:00:32.058 ********** 2026-04-13 00:49:23.184185 | orchestrator | [WARNING]: Skipped 2026-04-13 00:49:23.184192 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-13 00:49:23.184199 | orchestrator | to this access issue: 2026-04-13 00:49:23.184206 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-13 00:49:23.184213 | orchestrator | directory 2026-04-13 00:49:23.184220 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 00:49:23.184228 | orchestrator | 2026-04-13 00:49:23.184240 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-13 00:49:23.184259 | orchestrator | Monday 13 April 2026 00:46:59 +0000 (0:00:01.005) 0:00:33.064 ********** 2026-04-13 00:49:23.184272 | orchestrator | [WARNING]: Skipped 2026-04-13 00:49:23.184284 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-13 00:49:23.184295 | orchestrator | to this access issue: 2026-04-13 00:49:23.184308 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-13 00:49:23.184321 | orchestrator | directory 2026-04-13 00:49:23.184334 | 
orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 00:49:23.184347 | orchestrator | 2026-04-13 00:49:23.184359 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-13 00:49:23.184372 | orchestrator | Monday 13 April 2026 00:47:01 +0000 (0:00:01.227) 0:00:34.291 ********** 2026-04-13 00:49:23.184383 | orchestrator | changed: [testbed-manager] 2026-04-13 00:49:23.184394 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:49:23.184405 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:49:23.184417 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:49:23.184429 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:49:23.184441 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:49:23.184478 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:49:23.184489 | orchestrator | 2026-04-13 00:49:23.184496 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-13 00:49:23.184503 | orchestrator | Monday 13 April 2026 00:47:08 +0000 (0:00:07.738) 0:00:42.029 ********** 2026-04-13 00:49:23.184510 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-13 00:49:23.184518 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-13 00:49:23.184525 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-13 00:49:23.184532 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-13 00:49:23.184539 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-13 00:49:23.184546 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-13 00:49:23.184552 | 
orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-13 00:49:23.184559 | orchestrator | 2026-04-13 00:49:23.184566 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-13 00:49:23.184573 | orchestrator | Monday 13 April 2026 00:47:16 +0000 (0:00:07.478) 0:00:49.507 ********** 2026-04-13 00:49:23.184580 | orchestrator | changed: [testbed-manager] 2026-04-13 00:49:23.184596 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:49:23.184603 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:49:23.184610 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:49:23.184617 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:49:23.184624 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:49:23.184631 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:49:23.184638 | orchestrator | 2026-04-13 00:49:23.184645 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-13 00:49:23.184652 | orchestrator | Monday 13 April 2026 00:47:21 +0000 (0:00:04.711) 0:00:54.219 ********** 2026-04-13 00:49:23.184659 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.184674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.184690 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.184698 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.184706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-13 00:49:23.184713 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.184724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.184732 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.184739 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.184755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.184763 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.184770 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.184778 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.184785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.184792 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.184803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.184815 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.184825 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.184833 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.184841 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:49:23.184848 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.184855 | orchestrator | 2026-04-13 00:49:23.184862 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-13 00:49:23.184869 | orchestrator | Monday 13 April 2026 00:47:26 +0000 (0:00:05.177) 0:00:59.397 ********** 2026-04-13 00:49:23.184876 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-13 00:49:23.184883 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-13 00:49:23.184890 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-13 00:49:23.184897 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-13 00:49:23.184904 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-13 00:49:23.184911 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-13 00:49:23.184918 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-13 
00:49:23.184925 | orchestrator | 2026-04-13 00:49:23.184932 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-13 00:49:23.184939 | orchestrator | Monday 13 April 2026 00:47:29 +0000 (0:00:03.379) 0:01:02.777 ********** 2026-04-13 00:49:23.184946 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-13 00:49:23.184960 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-13 00:49:23.184967 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-13 00:49:23.184975 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-13 00:49:23.184982 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-13 00:49:23.184988 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-13 00:49:23.184995 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-13 00:49:23.185002 | orchestrator | 2026-04-13 00:49:23.185009 | orchestrator | TASK [common : Check common containers] **************************************** 2026-04-13 00:49:23.185016 | orchestrator | Monday 13 April 2026 00:47:33 +0000 (0:00:03.929) 0:01:06.706 ********** 2026-04-13 00:49:23.185023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-04-13 00:49:23.185036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.185043 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.185051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.185058 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.185065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.185081 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.185088 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.185100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.185108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.185115 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:49:23.185122 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.185129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.185147 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:49:23.185155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 
00:49:23.185162 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:49:23.185179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:49:23.185187 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:49:23.185195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:49:23.185202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:49:23.185209 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:49:23.185222 | orchestrator |
2026-04-13 00:49:23.185229 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-04-13 00:49:23.185236 | orchestrator | Monday 13 April 2026 00:47:36 +0000 (0:00:03.079) 0:01:09.786 **********
2026-04-13 00:49:23.185243 | orchestrator | changed: [testbed-manager]
2026-04-13 00:49:23.185250 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:49:23.185257 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:49:23.185264 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:49:23.185272 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:49:23.185283 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:49:23.185294 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:49:23.185305 | orchestrator |
2026-04-13 00:49:23.185316 | orchestrator | TASK [common : Link kolla_logs volume to 
/var/log/kolla] ***********
2026-04-13 00:49:23.185327 | orchestrator | Monday 13 April 2026 00:47:38 +0000 (0:00:01.741) 0:01:11.527 **********
2026-04-13 00:49:23.185338 | orchestrator | changed: [testbed-manager]
2026-04-13 00:49:23.185350 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:49:23.185360 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:49:23.185370 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:49:23.185382 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:49:23.185393 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:49:23.185405 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:49:23.185412 | orchestrator |
2026-04-13 00:49:23.185419 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-13 00:49:23.185426 | orchestrator | Monday 13 April 2026 00:47:39 +0000 (0:00:00.074) 0:01:13.023 **********
2026-04-13 00:49:23.185433 | orchestrator |
2026-04-13 00:49:23.185439 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-13 00:49:23.185446 | orchestrator | Monday 13 April 2026 00:47:39 +0000 (0:00:00.075) 0:01:13.098 **********
2026-04-13 00:49:23.185472 | orchestrator |
2026-04-13 00:49:23.185480 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-13 00:49:23.185487 | orchestrator | Monday 13 April 2026 00:47:40 +0000 (0:00:00.075) 0:01:13.173 **********
2026-04-13 00:49:23.185494 | orchestrator |
2026-04-13 00:49:23.185501 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-13 00:49:23.185508 | orchestrator | Monday 13 April 2026 00:47:40 +0000 (0:00:00.077) 0:01:13.251 **********
2026-04-13 00:49:23.185514 | orchestrator |
2026-04-13 00:49:23.185521 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-13 00:49:23.185528 | orchestrator | Monday 13 April 2026 00:47:40 +0000 (0:00:00.076) 0:01:13.327 **********
2026-04-13 00:49:23.185535 | orchestrator |
2026-04-13 00:49:23.185542 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-13 00:49:23.185552 | orchestrator | Monday 13 April 2026 00:47:40 +0000 (0:00:00.080) 0:01:13.408 **********
2026-04-13 00:49:23.185563 | orchestrator |
2026-04-13 00:49:23.185574 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-13 00:49:23.185586 | orchestrator | Monday 13 April 2026 00:47:40 +0000 (0:00:00.075) 0:01:13.483 **********
2026-04-13 00:49:23.185596 | orchestrator |
2026-04-13 00:49:23.185603 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-13 00:49:23.185615 | orchestrator | Monday 13 April 2026 00:47:40 +0000 (0:00:00.093) 0:01:13.576 **********
2026-04-13 00:49:23.185622 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:49:23.185629 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:49:23.185636 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:49:23.185643 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:49:23.185650 | orchestrator | changed: [testbed-manager]
2026-04-13 00:49:23.185657 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:49:23.185664 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:49:23.185671 | orchestrator |
2026-04-13 00:49:23.185684 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-04-13 00:49:23.185692 | orchestrator | Monday 13 April 2026 00:48:22 +0000 (0:00:42.015) 0:01:55.592 **********
2026-04-13 00:49:23.185699 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:49:23.185706 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:49:23.185713 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:49:23.185719 | orchestrator | changed: 
[testbed-manager]
2026-04-13 00:49:23.185726 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:49:23.185733 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:49:23.185740 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:49:23.185747 | orchestrator |
2026-04-13 00:49:23.185754 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-04-13 00:49:23.185761 | orchestrator | Monday 13 April 2026 00:49:08 +0000 (0:00:45.698) 0:02:41.291 **********
2026-04-13 00:49:23.185768 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:49:23.185780 | orchestrator | ok: [testbed-manager]
2026-04-13 00:49:23.185787 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:49:23.185794 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:49:23.185801 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:49:23.185808 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:49:23.185815 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:49:23.185822 | orchestrator |
2026-04-13 00:49:23.185829 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-04-13 00:49:23.185836 | orchestrator | Monday 13 April 2026 00:49:10 +0000 (0:00:02.133) 0:02:43.424 **********
2026-04-13 00:49:23.185843 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:49:23.185850 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:49:23.185857 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:49:23.185864 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:49:23.185871 | orchestrator | changed: [testbed-manager]
2026-04-13 00:49:23.185878 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:49:23.185885 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:49:23.185892 | orchestrator |
2026-04-13 00:49:23.185899 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:49:23.185907 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-13 00:49:23.185915 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-13 00:49:23.185922 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-13 00:49:23.185929 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-13 00:49:23.185936 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-13 00:49:23.185943 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-13 00:49:23.185956 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-13 00:49:23.185963 | orchestrator |
2026-04-13 00:49:23.185970 | orchestrator |
2026-04-13 00:49:23.185977 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:49:23.185984 | orchestrator | Monday 13 April 2026 00:49:19 +0000 (0:00:09.525) 0:02:52.949 **********
2026-04-13 00:49:23.185991 | orchestrator | ===============================================================================
2026-04-13 00:49:23.185999 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 45.70s
2026-04-13 00:49:23.186005 | orchestrator | common : Restart fluentd container ------------------------------------- 42.02s
2026-04-13 00:49:23.186051 | orchestrator | common : Restart cron container ----------------------------------------- 9.53s
2026-04-13 00:49:23.186060 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 7.74s
2026-04-13 00:49:23.186067 | orchestrator | common : Copying over cron logrotate config file ------------------------ 7.48s
2026-04-13 00:49:23.186074 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 7.09s
2026-04-13 00:49:23.186081 | orchestrator | common : Copying over config.json files for services -------------------- 6.41s
2026-04-13 00:49:23.186088 | orchestrator | common : Ensuring config directories have correct owner and permission --- 5.18s
2026-04-13 00:49:23.186095 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.87s
2026-04-13 00:49:23.186102 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.71s
2026-04-13 00:49:23.186109 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.93s
2026-04-13 00:49:23.186116 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.38s
2026-04-13 00:49:23.186123 | orchestrator | common : Check common containers ---------------------------------------- 3.08s
2026-04-13 00:49:23.186130 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.86s
2026-04-13 00:49:23.186142 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.40s
2026-04-13 00:49:23.186149 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.13s
2026-04-13 00:49:23.186156 | orchestrator | common : Creating log volume -------------------------------------------- 1.74s
2026-04-13 00:49:23.186163 | orchestrator | common : include_tasks -------------------------------------------------- 1.52s
2026-04-13 00:49:23.186170 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.50s
2026-04-13 00:49:23.186177 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.45s
2026-04-13 00:49:23.224260 | orchestrator | 2026-04-13 00:49:23 | INFO  | Task e23e51bb-de39-489c-a902-a79ba83519ba is in state STARTED
2026-04-13 00:49:23.224377 | orchestrator | 2026-04-13 
00:49:23 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:49:23.224401 | orchestrator | 2026-04-13 00:49:23 | INFO  | Task a0e7fb9e-a52d-499e-a60c-bca3b21c16d1 is in state STARTED
2026-04-13 00:49:23.224418 | orchestrator | 2026-04-13 00:49:23 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED
2026-04-13 00:49:23.224434 | orchestrator | 2026-04-13 00:49:23 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:49:23.224450 | orchestrator | 2026-04-13 00:49:23 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:49:23.224530 | orchestrator | 2026-04-13 00:49:23 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:49:38.621170 | orchestrator | 2026-04-13 00:49:38 | INFO  | Task e23e51bb-de39-489c-a902-a79ba83519ba is in state SUCCESS
2026-04-13 00:49:38.621279 | orchestrator | 2026-04-13 00:49:38 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:49:38.621293 | orchestrator | 2026-04-13 00:49:38 | INFO  | Task a0e7fb9e-a52d-499e-a60c-bca3b21c16d1 is in state STARTED
2026-04-13 00:49:38.621305 | orchestrator | 2026-04-13 00:49:38 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED
2026-04-13 00:49:38.622344 | orchestrator | 2026-04-13 00:49:38 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:49:38.626770 | orchestrator | 2026-04-13 00:49:38 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:49:38.626871 | orchestrator | 2026-04-13 00:49:38 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:49:41.671350 | orchestrator | 2026-04-13 00:49:41 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:49:41.671515 | orchestrator | 2026-04-13 00:49:41 | INFO  | Task a0e7fb9e-a52d-499e-a60c-bca3b21c16d1 is in state STARTED
2026-04-13 00:49:41.673703 | orchestrator | 2026-04-13 00:49:41 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED
2026-04-13 00:49:41.674521 | orchestrator | 2026-04-13 00:49:41 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED
2026-04-13 00:49:41.676106 | orchestrator | 2026-04-13 00:49:41 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:49:41.678270 | orchestrator | 2026-04-13 00:49:41 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:49:41.678345 | orchestrator | 2026-04-13 00:49:41 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:49:53.861309 | orchestrator | 2026-04-13 00:49:53 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:49:53.893651 | orchestrator |
2026-04-13 00:49:53.893773 | orchestrator |
2026-04-13 00:49:53.893798 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 00:49:53.893817 | orchestrator |
2026-04-13 00:49:53.893834 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 00:49:53.893851 | orchestrator | Monday 13 April 2026 00:49:25 +0000 (0:00:00.486) 0:00:00.486 **********
2026-04-13 00:49:53.893868 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:49:53.893886 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:49:53.893902 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:49:53.893919 | orchestrator |
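The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above are produced by the OSISM client polling the states of asynchronous (Celery-style) task IDs until each reaches a terminal state. A minimal sketch of such a polling loop, assuming a hypothetical `get_state` callable (this is illustrative, not the actual osism client code):

```python
import time
from typing import Callable, Dict, List


def wait_for_tasks(task_ids: List[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0,
                   log=print) -> Dict[str, str]:
    """Poll each task's state, logging every check, until none is pending.

    Tasks in PENDING/STARTED are re-checked on the next iteration;
    tasks that reach a terminal state (e.g. SUCCESS) stop being polled,
    matching how finished task IDs drop out of the log above.
    """
    final: Dict[str, str] = {}
    pending = list(task_ids)
    while pending:
        still_pending = []
        for task_id in pending:
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in ("PENDING", "STARTED"):
                still_pending.append(task_id)
            else:
                final[task_id] = state  # terminal: remember and stop polling
        pending = still_pending
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return final
```

The injected `get_state` and `log` parameters keep the loop testable without a real task backend or a real clock.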
2026-04-13 00:49:53.893935 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 00:49:53.893967 | orchestrator | Monday 13 April 2026 00:49:25 +0000 (0:00:00.404) 0:00:00.892 **********
2026-04-13 00:49:53.893998 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-04-13 00:49:53.894063 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-04-13 00:49:53.894084 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-04-13 00:49:53.894101 | orchestrator |
2026-04-13 00:49:53.894118 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-04-13 00:49:53.894135 | orchestrator |
2026-04-13 00:49:53.894151 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-04-13 00:49:53.894168 | orchestrator | Monday 13 April 2026 00:49:26 +0000 (0:00:00.646) 0:00:01.539 **********
2026-04-13 00:49:53.894186 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:49:53.894204 | orchestrator |
2026-04-13 00:49:53.894222 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-04-13 00:49:53.894240 | orchestrator | Monday 13 April 2026 00:49:27 +0000 (0:00:00.695) 0:00:02.234 **********
2026-04-13 00:49:53.894257 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-13 00:49:53.894275 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-13 00:49:53.894292 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-13 00:49:53.894309 | orchestrator |
2026-04-13 00:49:53.894326 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-04-13 00:49:53.894344 | orchestrator | Monday 13 April 2026 00:49:29 +0000 (0:00:02.069) 0:00:04.303 **********
2026-04-13 00:49:53.894361 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-13 00:49:53.894378 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-13 00:49:53.894396 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-13 00:49:53.894413 | orchestrator |
2026-04-13 00:49:53.894430 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-04-13 00:49:53.894448 | orchestrator | Monday 13 April 2026 00:49:31 +0000 (0:00:01.927) 0:00:06.230 **********
2026-04-13 00:49:53.894465 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:49:53.894503 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:49:53.894521 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:49:53.894538 | orchestrator |
2026-04-13 00:49:53.894555 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-04-13 00:49:53.894585 | orchestrator | Monday 13 April 2026 00:49:33 +0000 (0:00:02.572) 0:00:08.803 **********
2026-04-13 00:49:53.894603 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:49:53.894620 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:49:53.894637 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:49:53.894653 | orchestrator |
2026-04-13 00:49:53.894670 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:49:53.894687 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:49:53.894705 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:49:53.894744 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:49:53.894761 | orchestrator |
2026-04-13 00:49:53.894777 | orchestrator |
2026-04-13 00:49:53.894793 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:49:53.894809 | orchestrator | Monday 13 April 2026 00:49:37 +0000 (0:00:03.694) 0:00:12.498 **********
2026-04-13 00:49:53.894827 | orchestrator | ===============================================================================
2026-04-13 00:49:53.894845 | orchestrator | memcached : Restart memcached container --------------------------------- 3.69s
2026-04-13 00:49:53.894861 | orchestrator | memcached : Check memcached container ----------------------------------- 2.57s
2026-04-13 00:49:53.894875 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.07s
2026-04-13 00:49:53.894889 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.93s
2026-04-13 00:49:53.894904 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.70s
2026-04-13 00:49:53.894919 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s
2026-04-13 00:49:53.894935 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s
2026-04-13 00:49:53.894953 | orchestrator |
2026-04-13 00:49:53.894970 | orchestrator |
2026-04-13 00:49:53.894986 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 00:49:53.895003 | orchestrator |
2026-04-13 00:49:53.895019 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 00:49:53.895036 | orchestrator | Monday 13 April 2026 00:49:25 +0000 (0:00:00.468) 0:00:00.468 **********
2026-04-13 00:49:53.895053 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:49:53.895069 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:49:53.895086 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:49:53.895103 | orchestrator |
2026-04-13 00:49:53.895121 | orchestrator | TASK [Group hosts based on enabled 
services] ***********************************
2026-04-13 00:49:53.895162 | orchestrator | Monday 13 April 2026 00:49:26 +0000 (0:00:00.410) 0:00:00.878 **********
2026-04-13 00:49:53.895181 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-04-13 00:49:53.895197 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-04-13 00:49:53.895214 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-04-13 00:49:53.895231 | orchestrator |
2026-04-13 00:49:53.895248 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-04-13 00:49:53.895267 | orchestrator |
2026-04-13 00:49:53.895285 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-04-13 00:49:53.895303 | orchestrator | Monday 13 April 2026 00:49:26 +0000 (0:00:00.477) 0:00:01.355 **********
2026-04-13 00:49:53.895319 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:49:53.895337 | orchestrator |
2026-04-13 00:49:53.895355 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-04-13 00:49:53.895371 | orchestrator | Monday 13 April 2026 00:49:27 +0000 (0:00:01.249) 0:00:02.604 **********
2026-04-13 00:49:53.895392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-13 00:49:53.895416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-13 00:49:53.895460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-13 00:49:53.895476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-13 00:49:53.895512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-13 00:49:53.895535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-13 00:49:53.895546 | orchestrator |
2026-04-13 00:49:53.895556 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-04-13 00:49:53.895567 | orchestrator | Monday 13 April 2026 00:49:30 +0000 (0:00:02.508) 0:00:05.113 **********
2026-04-13 00:49:53.895578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-13 00:49:53.895589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-13 00:49:53.895614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-13 00:49:53.895626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 
26379'], 'timeout': '30'}}}) 2026-04-13 00:49:53.895636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:49:53.895653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:49:53.895664 | orchestrator | 2026-04-13 00:49:53.895674 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-13 00:49:53.895684 | orchestrator | Monday 13 April 2026 00:49:33 +0000 (0:00:03.204) 0:00:08.318 ********** 2026-04-13 00:49:53.895695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:49:53.895705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:49:53.895722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:49:53.895736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:49:53.895747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:49:53.895757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:49:53.895767 | orchestrator | 2026-04-13 00:49:53.895782 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-04-13 00:49:53.895793 | orchestrator | Monday 13 April 2026 00:49:36 +0000 (0:00:03.450) 
0:00:11.769 ********** 2026-04-13 00:49:53.895803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:49:53.895814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:49:53.895830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:49:53.895845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:49:53.895855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:49:53.895866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:49:53.895876 | orchestrator | 
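For reference, the repeated `healthcheck` dicts in the task items above carry their durations as seconds in string form; when the container is created they end up in the Docker Engine API's `HealthConfig`, whose durations are expressed in nanoseconds. A minimal sketch of that translation (the helper name below is ours for illustration, not a kolla-ansible function):

```python
# Hypothetical helper (illustration only, not kolla-ansible code): shows how a
# healthcheck dict as logged above maps onto the Docker Engine API HealthConfig,
# where Interval/Timeout/StartPeriod are nanoseconds and Retries is an integer.
NANOSECONDS_PER_SECOND = 1_000_000_000

def to_docker_healthconfig(healthcheck: dict) -> dict:
    return {
        "Test": healthcheck["test"],
        "Interval": int(healthcheck["interval"]) * NANOSECONDS_PER_SECOND,
        "Timeout": int(healthcheck["timeout"]) * NANOSECONDS_PER_SECOND,
        "StartPeriod": int(healthcheck["start_period"]) * NANOSECONDS_PER_SECOND,
        "Retries": int(healthcheck["retries"]),
    }

# The redis container's healthcheck exactly as it appears in the log output:
redis_healthcheck = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
    "timeout": "30",
}

config = to_docker_healthconfig(redis_healthcheck)
print(config["Interval"], config["Retries"])  # 30000000000 3
```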
2026-04-13 00:49:53.895886 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-13 00:49:53.895896 | orchestrator | Monday 13 April 2026 00:49:39 +0000 (0:00:03.001) 0:00:14.771 **********
2026-04-13 00:49:53.895906 | orchestrator |
2026-04-13 00:49:53.895917 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-13 00:49:53.895932 | orchestrator | Monday 13 April 2026 00:49:41 +0000 (0:00:01.334) 0:00:16.105 **********
2026-04-13 00:49:53.895942 | orchestrator |
2026-04-13 00:49:53.895952 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-13 00:49:53.895962 | orchestrator | Monday 13 April 2026 00:49:41 +0000 (0:00:00.218) 0:00:16.324 **********
2026-04-13 00:49:53.895972 | orchestrator |
2026-04-13 00:49:53.895982 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-13 00:49:53.895997 | orchestrator | Monday 13 April 2026 00:49:41 +0000 (0:00:00.348) 0:00:16.672 **********
2026-04-13 00:49:53.896008 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:49:53.896017 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:49:53.896027 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:49:53.896037 | orchestrator |
2026-04-13 00:49:53.896050 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-04-13 00:49:53.896068 | orchestrator | Monday 13 April 2026 00:49:45 +0000 (0:00:03.525) 0:00:20.198 **********
2026-04-13 00:49:53.896085 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:49:53.896101 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:49:53.896118 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:49:53.896134 | orchestrator |
2026-04-13 00:49:53.896151 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:49:53.896169 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:49:53.896188 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:49:53.896206 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:49:53.896224 | orchestrator |
2026-04-13 00:49:53.896246 | orchestrator |
2026-04-13 00:49:53.896264 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:49:53.896282 | orchestrator | Monday 13 April 2026 00:49:50 +0000 (0:00:05.295) 0:00:25.493 **********
2026-04-13 00:49:53.896299 | orchestrator | ===============================================================================
2026-04-13 00:49:53.896314 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.30s
2026-04-13 00:49:53.896324 | orchestrator | redis : Restart redis container ----------------------------------------- 3.53s
2026-04-13 00:49:53.896334 | orchestrator | redis : Copying over redis config files --------------------------------- 3.45s
2026-04-13 00:49:53.896344 | orchestrator | redis : Copying over default config.json files -------------------------- 3.20s
2026-04-13 00:49:53.896354 | orchestrator | redis : Check redis containers ------------------------------------------ 3.00s
2026-04-13 00:49:53.896364 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.51s
2026-04-13 00:49:53.896379 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.90s
2026-04-13 00:49:53.896403 | orchestrator | redis : include_tasks --------------------------------------------------- 1.25s
2026-04-13 00:49:53.896421 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s
2026-04-13 00:49:53.896435 | orchestrator | 
Group hosts based on Kolla action --------------------------------------- 0.41s 2026-04-13 00:49:53.896446 | orchestrator | 2026-04-13 00:49:53 | INFO  | Task a0e7fb9e-a52d-499e-a60c-bca3b21c16d1 is in state SUCCESS 2026-04-13 00:49:53.896456 | orchestrator | 2026-04-13 00:49:53 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED 2026-04-13 00:49:53.896466 | orchestrator | 2026-04-13 00:49:53 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:49:53.896476 | orchestrator | 2026-04-13 00:49:53 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:49:53.896513 | orchestrator | 2026-04-13 00:49:53 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:49:53.896524 | orchestrator | 2026-04-13 00:49:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:49:57.119245 | orchestrator | 2026-04-13 00:49:57 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:49:57.121102 | orchestrator | 2026-04-13 00:49:57 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED 2026-04-13 00:49:57.121795 | orchestrator | 2026-04-13 00:49:57 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:49:57.122961 | orchestrator | 2026-04-13 00:49:57 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:49:57.124411 | orchestrator | 2026-04-13 00:49:57 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:49:57.124442 | orchestrator | 2026-04-13 00:49:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:00.162237 | orchestrator | 2026-04-13 00:50:00 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:50:00.163330 | orchestrator | 2026-04-13 00:50:00 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED 2026-04-13 00:50:00.165774 | orchestrator | 2026-04-13 00:50:00 
| INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:50:00.169868 | orchestrator | 2026-04-13 00:50:00 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:50:00.170364 | orchestrator | 2026-04-13 00:50:00 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:50:00.170399 | orchestrator | 2026-04-13 00:50:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:03.244139 | orchestrator | 2026-04-13 00:50:03 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:50:03.244678 | orchestrator | 2026-04-13 00:50:03 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED 2026-04-13 00:50:03.245865 | orchestrator | 2026-04-13 00:50:03 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:50:03.247322 | orchestrator | 2026-04-13 00:50:03 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:50:03.248419 | orchestrator | 2026-04-13 00:50:03 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:50:03.248752 | orchestrator | 2026-04-13 00:50:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:06.290993 | orchestrator | 2026-04-13 00:50:06 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:50:06.291465 | orchestrator | 2026-04-13 00:50:06 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED 2026-04-13 00:50:06.293200 | orchestrator | 2026-04-13 00:50:06 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:50:06.294246 | orchestrator | 2026-04-13 00:50:06 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:50:06.295608 | orchestrator | 2026-04-13 00:50:06 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:50:06.295656 | orchestrator | 2026-04-13 00:50:06 | INFO  
| Wait 1 second(s) until the next check 2026-04-13 00:50:09.336930 | orchestrator | 2026-04-13 00:50:09 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:50:09.337482 | orchestrator | 2026-04-13 00:50:09 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED 2026-04-13 00:50:09.338636 | orchestrator | 2026-04-13 00:50:09 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:50:09.339275 | orchestrator | 2026-04-13 00:50:09 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:50:09.341364 | orchestrator | 2026-04-13 00:50:09 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:50:09.341401 | orchestrator | 2026-04-13 00:50:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:12.426729 | orchestrator | 2026-04-13 00:50:12 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:50:12.429561 | orchestrator | 2026-04-13 00:50:12 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED 2026-04-13 00:50:12.429949 | orchestrator | 2026-04-13 00:50:12 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:50:12.430773 | orchestrator | 2026-04-13 00:50:12 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:50:12.431054 | orchestrator | 2026-04-13 00:50:12 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:50:12.431198 | orchestrator | 2026-04-13 00:50:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:15.466939 | orchestrator | 2026-04-13 00:50:15 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:50:15.468452 | orchestrator | 2026-04-13 00:50:15 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED 2026-04-13 00:50:15.470467 | orchestrator | 2026-04-13 00:50:15 | INFO  | Task 
78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:50:15.474767 | orchestrator | 2026-04-13 00:50:15 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:50:15.476558 | orchestrator | 2026-04-13 00:50:15 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:50:15.476597 | orchestrator | 2026-04-13 00:50:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:18.556624 | orchestrator | 2026-04-13 00:50:18 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:50:18.557665 | orchestrator | 2026-04-13 00:50:18 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED 2026-04-13 00:50:18.559821 | orchestrator | 2026-04-13 00:50:18 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:50:18.562560 | orchestrator | 2026-04-13 00:50:18 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:50:18.563341 | orchestrator | 2026-04-13 00:50:18 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:50:18.563364 | orchestrator | 2026-04-13 00:50:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:21.599570 | orchestrator | 2026-04-13 00:50:21 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:50:21.602890 | orchestrator | 2026-04-13 00:50:21 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED 2026-04-13 00:50:21.604951 | orchestrator | 2026-04-13 00:50:21 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:50:21.606169 | orchestrator | 2026-04-13 00:50:21 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:50:21.607292 | orchestrator | 2026-04-13 00:50:21 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:50:21.607336 | orchestrator | 2026-04-13 00:50:21 | INFO  | Wait 1 
second(s) until the next check 2026-04-13 00:50:24.646286 | orchestrator | 2026-04-13 00:50:24 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:50:24.646388 | orchestrator | 2026-04-13 00:50:24 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED 2026-04-13 00:50:24.646902 | orchestrator | 2026-04-13 00:50:24 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:50:24.648029 | orchestrator | 2026-04-13 00:50:24 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:50:24.649171 | orchestrator | 2026-04-13 00:50:24 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:50:24.649208 | orchestrator | 2026-04-13 00:50:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:27.676112 | orchestrator | 2026-04-13 00:50:27 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:50:27.677168 | orchestrator | 2026-04-13 00:50:27 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED 2026-04-13 00:50:27.678709 | orchestrator | 2026-04-13 00:50:27 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:50:27.680647 | orchestrator | 2026-04-13 00:50:27 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:50:27.682324 | orchestrator | 2026-04-13 00:50:27 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:50:27.683385 | orchestrator | 2026-04-13 00:50:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:30.795674 | orchestrator | 2026-04-13 00:50:30 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:50:30.817722 | orchestrator | 2026-04-13 00:50:30 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED 2026-04-13 00:50:30.819708 | orchestrator | 2026-04-13 00:50:30 | INFO  | Task 
78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:50:30.819742 | orchestrator | 2026-04-13 00:50:30 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:50:30.820389 | orchestrator | 2026-04-13 00:50:30 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:50:30.820400 | orchestrator | 2026-04-13 00:50:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:33.862972 | orchestrator | 2026-04-13 00:50:33 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:50:33.864883 | orchestrator | 2026-04-13 00:50:33 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state STARTED 2026-04-13 00:50:33.865965 | orchestrator | 2026-04-13 00:50:33 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:50:33.869045 | orchestrator | 2026-04-13 00:50:33 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED 2026-04-13 00:50:33.870474 | orchestrator | 2026-04-13 00:50:33 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:50:33.870560 | orchestrator | 2026-04-13 00:50:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:36.905272 | orchestrator | 2026-04-13 00:50:36 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:50:36.905946 | orchestrator | 2026-04-13 00:50:36 | INFO  | Task 98acf52a-720c-44c5-a675-aaa84d603a8c is in state SUCCESS 2026-04-13 00:50:36.907784 | orchestrator | 2026-04-13 00:50:36.907889 | orchestrator | 2026-04-13 00:50:36.907903 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 00:50:36.907911 | orchestrator | 2026-04-13 00:50:36.907918 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 00:50:36.907924 | orchestrator | Monday 13 April 2026 00:49:24 +0000 (0:00:00.443) 0:00:00.443 
**********
2026-04-13 00:50:36.907932 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:50:36.907941 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:50:36.907948 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:50:36.907956 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:50:36.907963 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:50:36.907970 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:50:36.907978 | orchestrator |
2026-04-13 00:50:36.907985 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 00:50:36.908010 | orchestrator | Monday 13 April 2026 00:49:25 +0000 (0:00:01.022) 0:00:01.465 **********
2026-04-13 00:50:36.908017 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-13 00:50:36.908024 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-13 00:50:36.908031 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-13 00:50:36.908037 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-13 00:50:36.908043 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-13 00:50:36.908049 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-13 00:50:36.908055 | orchestrator |
2026-04-13 00:50:36.908061 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-13 00:50:36.908068 | orchestrator |
2026-04-13 00:50:36.908075 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-13 00:50:36.908083 | orchestrator | Monday 13 April 2026 00:49:27 +0000 (0:00:01.342) 0:00:02.807 **********
2026-04-13 00:50:36.908091 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:50:36.908100 | orchestrator |
2026-04-13 00:50:36.908107 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-13 00:50:36.908114 | orchestrator | Monday 13 April 2026 00:49:29 +0000 (0:00:02.115) 0:00:04.923 **********
2026-04-13 00:50:36.908121 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-13 00:50:36.908129 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-13 00:50:36.908136 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-13 00:50:36.908143 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-13 00:50:36.908150 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-13 00:50:36.908158 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-13 00:50:36.908165 | orchestrator |
2026-04-13 00:50:36.908179 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-13 00:50:36.908186 | orchestrator | Monday 13 April 2026 00:49:31 +0000 (0:00:02.551) 0:00:07.475 **********
2026-04-13 00:50:36.908194 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-13 00:50:36.908201 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-13 00:50:36.908208 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-13 00:50:36.908215 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-13 00:50:36.908222 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-13 00:50:36.908230 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-13 00:50:36.908237 | orchestrator |
2026-04-13 00:50:36.908244 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-13 00:50:36.908252 | orchestrator | Monday 13 April 2026 00:49:33 +0000 (0:00:01.772) 0:00:09.247 **********
2026-04-13 00:50:36.908259 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-04-13 00:50:36.908266 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:50:36.908275 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-04-13 00:50:36.908282 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:50:36.908289 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-04-13 00:50:36.908296 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:50:36.908303 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-04-13 00:50:36.908311 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:50:36.908318 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-04-13 00:50:36.908325 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:50:36.908332 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-04-13 00:50:36.908340 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:50:36.908355 | orchestrator |
2026-04-13 00:50:36.908363 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-04-13 00:50:36.908371 | orchestrator | Monday 13 April 2026 00:49:35 +0000 (0:00:02.098) 0:00:11.346 **********
2026-04-13 00:50:36.908379 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:50:36.908387 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:50:36.908395 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:50:36.908403 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:50:36.908410 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:50:36.908418 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:50:36.908426 | orchestrator |
2026-04-13 00:50:36.908434 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-04-13 00:50:36.908442 | orchestrator | Monday 13 April 2026 00:49:36 +0000
(0:00:01.020) 0:00:12.366 ********** 2026-04-13 00:50:36.908469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908489 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908581 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908655 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908662 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908672 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908695 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908701 | orchestrator | 2026-04-13 00:50:36.908708 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-13 00:50:36.908714 | orchestrator | Monday 13 April 2026 00:49:40 +0000 (0:00:03.207) 0:00:15.574 ********** 2026-04-13 00:50:36.908720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908727 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908757 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908782 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908813 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908833 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908841 | orchestrator | 2026-04-13 00:50:36.908848 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-13 00:50:36.908856 | orchestrator | Monday 13 April 2026 00:49:45 +0000 (0:00:05.602) 0:00:21.176 ********** 2026-04-13 00:50:36.908863 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:50:36.908871 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:50:36.908878 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:50:36.908885 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:50:36.908892 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:50:36.908900 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:50:36.908907 | orchestrator | 2026-04-13 00:50:36.908913 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-04-13 00:50:36.908919 | orchestrator | Monday 13 April 2026 00:49:47 +0000 (0:00:01.898) 0:00:23.075 ********** 2026-04-13 00:50:36.908926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908948 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.908986 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:50:36.909004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.909012 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.909020 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.909033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.909042 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:50:36.909050 | orchestrator | 2026-04-13 00:50:36.909057 
| orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-13 00:50:36.909065 | orchestrator | Monday 13 April 2026 00:49:50 +0000 (0:00:03.212) 0:00:26.288 ********** 2026-04-13 00:50:36.909073 | orchestrator | 2026-04-13 00:50:36.909083 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-13 00:50:36.909090 | orchestrator | Monday 13 April 2026 00:49:51 +0000 (0:00:00.368) 0:00:26.657 ********** 2026-04-13 00:50:36.909098 | orchestrator | 2026-04-13 00:50:36.909106 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-13 00:50:36.909113 | orchestrator | Monday 13 April 2026 00:49:51 +0000 (0:00:00.356) 0:00:27.013 ********** 2026-04-13 00:50:36.909119 | orchestrator | 2026-04-13 00:50:36.909126 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-13 00:50:36.909133 | orchestrator | Monday 13 April 2026 00:49:51 +0000 (0:00:00.173) 0:00:27.187 ********** 2026-04-13 00:50:36.909139 | orchestrator | 2026-04-13 00:50:36.909145 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-13 00:50:36.909153 | orchestrator | Monday 13 April 2026 00:49:51 +0000 (0:00:00.299) 0:00:27.487 ********** 2026-04-13 00:50:36.909161 | orchestrator | 2026-04-13 00:50:36.909172 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-13 00:50:36.909179 | orchestrator | Monday 13 April 2026 00:49:52 +0000 (0:00:00.146) 0:00:27.633 ********** 2026-04-13 00:50:36.909187 | orchestrator | 2026-04-13 00:50:36.909195 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-13 00:50:36.909202 | orchestrator | Monday 13 April 2026 00:49:52 +0000 (0:00:00.294) 0:00:27.927 ********** 2026-04-13 00:50:36.909210 | orchestrator | changed: 
[testbed-node-0]
2026-04-13 00:50:36.909218 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:50:36.909225 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:50:36.909233 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:50:36.909241 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:50:36.909249 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:50:36.909256 | orchestrator |
2026-04-13 00:50:36.909264 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-04-13 00:50:36.909272 | orchestrator | Monday 13 April 2026 00:49:58 +0000 (0:00:05.853) 0:00:33.780 **********
2026-04-13 00:50:36.909280 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:50:36.909287 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:50:36.909294 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:50:36.909302 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:50:36.909310 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:50:36.909317 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:50:36.909325 | orchestrator |
2026-04-13 00:50:36.909332 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-13 00:50:36.909340 | orchestrator | Monday 13 April 2026 00:49:59 +0000 (0:00:01.591) 0:00:35.372 **********
2026-04-13 00:50:36.909348 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:50:36.909355 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:50:36.909363 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:50:36.909370 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:50:36.909378 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:50:36.909385 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:50:36.909393 | orchestrator |
2026-04-13 00:50:36.909401 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-04-13 00:50:36.909409 | orchestrator | Monday 13 April 2026 00:50:09 +0000 (0:00:09.595) 0:00:44.967 **********
2026-04-13 00:50:36.909416 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-04-13 00:50:36.909424 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-04-13 00:50:36.909432 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-04-13 00:50:36.909439 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-04-13 00:50:36.909447 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-04-13 00:50:36.909465 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-04-13 00:50:36.909473 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-04-13 00:50:36.909482 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-04-13 00:50:36.909490 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-04-13 00:50:36.909497 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-04-13 00:50:36.909523 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-04-13 00:50:36.909529 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-13 00:50:36.909534 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-04-13 00:50:36.909540 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-13 00:50:36.909546 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-13 00:50:36.909552 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-13 00:50:36.909557 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-13 00:50:36.909564 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-13 00:50:36.909569 | orchestrator |
2026-04-13 00:50:36.909575 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-04-13 00:50:36.909582 | orchestrator | Monday 13 April 2026 00:50:19 +0000 (0:00:10.169) 0:00:55.136 **********
2026-04-13 00:50:36.909588 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-04-13 00:50:36.909594 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:50:36.909601 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-04-13 00:50:36.909607 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:50:36.909614 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-04-13 00:50:36.909620 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:50:36.909626 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-04-13 00:50:36.909633 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-04-13 00:50:36.909645 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-04-13 00:50:36.909652 | orchestrator |
2026-04-13 00:50:36.909658 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-04-13 00:50:36.909665 | orchestrator | Monday 13 April 2026 00:50:22 +0000 (0:00:02.730) 0:00:57.867 **********
2026-04-13 00:50:36.909671 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-04-13 00:50:36.909677 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:50:36.909683 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-04-13 00:50:36.909690 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:50:36.909696 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-04-13 00:50:36.909703 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:50:36.909709 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-04-13 00:50:36.909716 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-04-13 00:50:36.909723 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-04-13 00:50:36.909730 | orchestrator |
2026-04-13 00:50:36.909737 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-13 00:50:36.909759 | orchestrator | Monday 13 April 2026 00:50:25 +0000 (0:00:03.570) 0:01:01.438 **********
2026-04-13 00:50:36.909766 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:50:36.909772 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:50:36.909778 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:50:36.909784 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:50:36.909790 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:50:36.909796 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:50:36.909801 | orchestrator |
2026-04-13 00:50:36.909807 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:50:36.909813 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-13 00:50:36.909820 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-13 00:50:36.909827 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-13 00:50:36.909833 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-13 00:50:36.909839 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-13 00:50:36.909856 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-13 00:50:36.909863 | orchestrator |
2026-04-13 00:50:36.909868 | orchestrator |
2026-04-13 00:50:36.909875 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:50:36.909882 | orchestrator | Monday 13 April 2026 00:50:33 +0000 (0:00:07.300) 0:01:08.738 **********
2026-04-13 00:50:36.909888 | orchestrator | ===============================================================================
2026-04-13 00:50:36.909894 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.90s
2026-04-13 00:50:36.909900 | orchestrator | openvswitch : Set system-id, hostname and hw-offload ------------------- 10.17s
2026-04-13 00:50:36.909906 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 5.85s
2026-04-13 00:50:36.909912 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.60s
2026-04-13 00:50:36.909919 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.57s
2026-04-13 00:50:36.909924 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.21s
2026-04-13 00:50:36.909930 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.21s
2026-04-13 00:50:36.909937 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.73s
2026-04-13 00:50:36.909943 | orchestrator | module-load : Load modules ---------------------------------------------- 2.55s
2026-04-13 00:50:36.909949 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.12s
2026-04-13 00:50:36.909956 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.10s
2026-04-13 00:50:36.909962 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.90s
2026-04-13 00:50:36.909968 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.77s
2026-04-13 00:50:36.909975 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.64s
2026-04-13 00:50:36.909981 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.59s
2026-04-13 00:50:36.909987 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.34s
2026-04-13 00:50:36.909993 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.02s
2026-04-13 00:50:36.910000 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.02s
2026-04-13 00:50:36.910184 | orchestrator | 2026-04-13 00:50:36 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED
2026-04-13 00:50:36.910211 | orchestrator | 2026-04-13 00:50:36 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED
2026-04-13 00:50:36.910219 | orchestrator | 2026-04-13 00:50:36 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:50:36.910824 | orchestrator | 2026-04-13 00:50:36 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:50:36.910877 | orchestrator | 2026-04-13 00:50:36 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:50:40.157430 | orchestrator |
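The "Set system-id, hostname and hw-offload" task above loops over column/key/value items against the Open_vSwitch table; the item dicts are visible in the log output. A minimal sketch of the equivalent ovs-vsctl invocations (the item shape is taken from the log; the mapping to ovs-vsctl argv is an assumption about what the underlying module does, not the actual Kolla role code):

```python
# Sketch: map the loop items shown in the log, e.g.
# {'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'},
# to ovs-vsctl command lines. The argv layout is an assumption.
def ovs_vsctl_args(item):
    if item.get("state") == "absent":
        # e.g. other_config:hw-offload is removed when offload is unwanted
        return ["ovs-vsctl", "remove", "Open_vSwitch", ".",
                item["col"], item["name"]]
    return ["ovs-vsctl", "set", "Open_vSwitch", ".",
            "{}:{}={}".format(item["col"], item["name"], item["value"])]

print(ovs_vsctl_args({"col": "external_ids", "name": "system-id",
                      "value": "testbed-node-0"}))
```

The "ok" (rather than "changed") results for the hw-offload items indicate the key was already absent on every node.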
2026-04-13 00:50:40 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:50:40.157990 | orchestrator | 2026-04-13 00:50:40 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED
2026-04-13 00:50:40.158742 | orchestrator | 2026-04-13 00:50:40 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED
2026-04-13 00:50:40.159670 | orchestrator | 2026-04-13 00:50:40 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:50:40.160560 | orchestrator | 2026-04-13 00:50:40 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:50:40.160796 | orchestrator | 2026-04-13 00:50:40 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:50:43.219638 | orchestrator | 2026-04-13 00:50:43 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:50:43.222841 | orchestrator | 2026-04-13 00:50:43 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED
2026-04-13 00:50:43.226084 | orchestrator | 2026-04-13 00:50:43 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED
2026-04-13 00:50:43.227089 | orchestrator | 2026-04-13 00:50:43 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:50:43.228171 | orchestrator | 2026-04-13 00:50:43 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:50:43.228227 | orchestrator | 2026-04-13 00:50:43 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:50:46.270660 | orchestrator | 2026-04-13 00:50:46 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:50:46.271067 | orchestrator | 2026-04-13 00:50:46 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED
2026-04-13 00:50:46.272030 | orchestrator | 2026-04-13 00:50:46 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED
2026-04-13 00:50:46.272859 | orchestrator | 2026-04-13 00:50:46 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:50:46.273627 | orchestrator | 2026-04-13 00:50:46 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:50:46.273667 | orchestrator | 2026-04-13 00:50:46 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:50:49.371342 | orchestrator | 2026-04-13 00:50:49 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:50:49.371431 | orchestrator | 2026-04-13 00:50:49 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED
2026-04-13 00:50:49.372025 | orchestrator | 2026-04-13 00:50:49 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED
2026-04-13 00:50:49.372596 | orchestrator | 2026-04-13 00:50:49 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:50:49.373167 | orchestrator | 2026-04-13 00:50:49 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:50:49.373196 | orchestrator | 2026-04-13 00:50:49 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:50:52.445219 | orchestrator | 2026-04-13 00:50:52 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:50:52.446256 | orchestrator | 2026-04-13 00:50:52 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED
2026-04-13 00:50:52.452217 | orchestrator | 2026-04-13 00:50:52 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED
2026-04-13 00:50:52.453115 | orchestrator | 2026-04-13 00:50:52 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:50:52.455977 | orchestrator | 2026-04-13 00:50:52 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:50:52.456046 | orchestrator | 2026-04-13 00:50:52 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:50:55.508563 | orchestrator | 2026-04-13 00:50:55 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:50:55.509151 | orchestrator | 2026-04-13 00:50:55 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED
2026-04-13 00:50:55.511678 | orchestrator | 2026-04-13 00:50:55 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED
2026-04-13 00:50:55.514259 | orchestrator | 2026-04-13 00:50:55 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:50:55.514531 | orchestrator | 2026-04-13 00:50:55 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:50:55.517049 | orchestrator | 2026-04-13 00:50:55 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:50:58.600950 | orchestrator | 2026-04-13 00:50:58 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:50:58.604548 | orchestrator | 2026-04-13 00:50:58 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED
2026-04-13 00:50:58.605358 | orchestrator | 2026-04-13 00:50:58 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED
2026-04-13 00:50:58.606822 | orchestrator | 2026-04-13 00:50:58 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:50:58.609683 | orchestrator | 2026-04-13 00:50:58 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:50:58.609742 | orchestrator | 2026-04-13 00:50:58 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:51:02.258393 | orchestrator | 2026-04-13 00:51:02 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:51:02.258469 | orchestrator | 2026-04-13 00:51:02 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED
2026-04-13 00:51:02.258480 | orchestrator | 2026-04-13 00:51:02 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED
2026-04-13 00:51:02.258489 | orchestrator | 2026-04-13 00:51:02 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:51:02.258498 | orchestrator | 2026-04-13 00:51:02 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:51:02.258507 | orchestrator | 2026-04-13 00:51:02 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:51:05.397128 | orchestrator | 2026-04-13 00:51:05 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:51:05.397186 | orchestrator | 2026-04-13 00:51:05 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED
2026-04-13 00:51:05.397877 | orchestrator | 2026-04-13 00:51:05 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED
2026-04-13 00:51:05.398572 | orchestrator | 2026-04-13 00:51:05 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:51:05.399310 | orchestrator | 2026-04-13 00:51:05 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:51:05.399321 | orchestrator | 2026-04-13 00:51:05 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:51:08.450943 | orchestrator | 2026-04-13 00:51:08 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:51:08.451102 | orchestrator | 2026-04-13 00:51:08 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED
2026-04-13 00:51:08.451115 | orchestrator | 2026-04-13 00:51:08 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED
2026-04-13 00:51:08.451124 | orchestrator | 2026-04-13 00:51:08 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:51:08.451133 | orchestrator | 2026-04-13 00:51:08 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:51:08.451195 | orchestrator | 2026-04-13 00:51:08 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:51:11.539989 | orchestrator | 2026-04-13 00:51:11 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:51:11.540112 | orchestrator | 2026-04-13 00:51:11 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED
2026-04-13 00:51:11.540141 | orchestrator | 2026-04-13 00:51:11 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED
2026-04-13 00:51:11.540164 | orchestrator | 2026-04-13 00:51:11 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state STARTED
2026-04-13 00:51:11.540185 | orchestrator | 2026-04-13 00:51:11 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:51:11.540206 | orchestrator | 2026-04-13 00:51:11 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:51:14.582170 | orchestrator | 2026-04-13 00:51:14 | INFO  | Task e759b9f1-1696-4428-a41d-affdd9f96066 is in state STARTED
2026-04-13 00:51:14.584695 | orchestrator | 2026-04-13 00:51:14 | INFO  | Task d1713e15-b410-474f-9642-eb95edee2be0 is in state STARTED
2026-04-13 00:51:14.585782 | orchestrator | 2026-04-13 00:51:14 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:51:14.588263 | orchestrator | 2026-04-13 00:51:14 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED
2026-04-13 00:51:14.590915 | orchestrator | 2026-04-13 00:51:14 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED
2026-04-13 00:51:14.595043 | orchestrator | 2026-04-13 00:51:14 | INFO  | Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 is in state SUCCESS
2026-04-13 00:51:14.599101 | orchestrator |
2026-04-13 00:51:14.599157 | orchestrator |
2026-04-13 00:51:14.599171 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-13 00:51:14.599184 | orchestrator |
2026-04-13 00:51:14.599197 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-13 00:51:14.599210 | orchestrator | Monday 13 April 2026 00:46:27 +0000
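The task-state polling above ("Task … is in state STARTED", then a one-second wait) repeats until each task reaches a terminal state, as Task 34dc3400-8c0f-450d-80bd-d6d11fc46ae8 just did with SUCCESS. A minimal poll-until-terminal sketch (the task IDs and state names come from the log; the client API and terminal-state set are assumptions, since the watcher's code is not shown):

```python
import time

# Poll a set of task IDs until every one reaches a terminal state.
# `get_state` is a stand-in for the real task-status API (not shown in
# the log); SUCCESS/FAILURE as terminal states is an assumption.
def wait_for_tasks(task_ids, get_state, interval=1.0):
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
        pending = {t for t in pending
                   if states[t] not in ("SUCCESS", "FAILURE")}
        if pending:
            time.sleep(interval)  # "Wait 1 second(s) until the next check"
    return states

# Toy driver: each task reports STARTED twice, then SUCCESS.
seen = {}
def fake_state(task_id):
    seen[task_id] = seen.get(task_id, 0) + 1
    return "SUCCESS" if seen[task_id] >= 3 else "STARTED"

print(wait_for_tasks(["c28ca6f6", "78fe2e3d"], fake_state, interval=0))
```

Note that new task IDs (e759b9f1…, d1713e15…) appear in the poll output above as further deployment tasks are enqueued while earlier ones are still running.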
(0:00:00.336) 0:00:00.336 **********
2026-04-13 00:51:14.599222 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:51:14.599235 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:51:14.599247 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:51:14.599258 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:51:14.599270 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:51:14.599281 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:51:14.599293 | orchestrator |
2026-04-13 00:51:14.599304 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-13 00:51:14.599316 | orchestrator | Monday 13 April 2026 00:46:28 +0000 (0:00:00.592) 0:00:00.929 **********
2026-04-13 00:51:14.599352 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:14.599365 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:14.599376 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:14.599388 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:14.599399 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:14.599410 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:14.599424 | orchestrator |
2026-04-13 00:51:14.599443 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-13 00:51:14.599462 | orchestrator | Monday 13 April 2026 00:46:29 +0000 (0:00:00.736) 0:00:01.665 **********
2026-04-13 00:51:14.599480 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:14.599497 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:14.599556 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:14.599575 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:14.599594 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:14.599614 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:14.599633 | orchestrator |
2026-04-13 00:51:14.599653 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-13 00:51:14.599666 | orchestrator | Monday 13 April 2026 00:46:29 +0000 (0:00:00.579) 0:00:02.245 **********
2026-04-13 00:51:14.599677 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:51:14.599688 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:51:14.599700 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:51:14.599713 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:51:14.599725 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:51:14.599738 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:51:14.599751 | orchestrator |
2026-04-13 00:51:14.599763 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-13 00:51:14.599776 | orchestrator | Monday 13 April 2026 00:46:32 +0000 (0:00:02.852) 0:00:05.098 **********
2026-04-13 00:51:14.599789 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:51:14.599801 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:51:14.599813 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:51:14.599825 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:51:14.599837 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:51:14.599849 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:51:14.599862 | orchestrator |
2026-04-13 00:51:14.599874 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-13 00:51:14.599887 | orchestrator | Monday 13 April 2026 00:46:33 +0000 (0:00:00.960) 0:00:06.059 **********
2026-04-13 00:51:14.599900 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:51:14.599912 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:51:14.599925 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:51:14.599936 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:51:14.599948 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:51:14.599960 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:51:14.599973 | orchestrator |
2026-04-13 00:51:14.599986 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-13 00:51:14.599999 | orchestrator | Monday 13 April 2026 00:46:35 +0000 (0:00:01.484) 0:00:07.543 **********
2026-04-13 00:51:14.600011 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:14.600024 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:14.600036 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:14.600049 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:14.600061 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:14.600072 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:14.600083 | orchestrator |
2026-04-13 00:51:14.600095 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-13 00:51:14.600106 | orchestrator | Monday 13 April 2026 00:46:36 +0000 (0:00:01.023) 0:00:08.566 **********
2026-04-13 00:51:14.600117 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:14.600128 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:14.600139 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:14.600150 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:14.600170 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:14.600181 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:14.600192 | orchestrator |
2026-04-13 00:51:14.600204 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-13 00:51:14.600215 | orchestrator | Monday 13 April 2026 00:46:37 +0000 (0:00:01.060) 0:00:09.627 **********
2026-04-13 00:51:14.600226 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-13 00:51:14.600237 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-13 00:51:14.600248 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:14.600269 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-13 00:51:14.600281 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-13 00:51:14.600292 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:14.600303 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-13 00:51:14.600314 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-13 00:51:14.600325 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:14.600336 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-13 00:51:14.600362 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-13 00:51:14.600374 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:14.600386 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-13 00:51:14.600397 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-13 00:51:14.600408 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:14.600419 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-13 00:51:14.600438 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-13 00:51:14.600456 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:14.600473 | orchestrator |
2026-04-13 00:51:14.600492 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-13 00:51:14.600579 | orchestrator | Monday 13 April 2026 00:46:38 +0000 (0:00:01.077) 0:00:10.704 **********
2026-04-13 00:51:14.600596 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:14.600607 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:14.600619 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:14.600630 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:14.600641 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:14.600652 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:14.600664 | orchestrator |
2026-04-13 00:51:14.600675 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-13 00:51:14.600688 | orchestrator | Monday 13 April 2026 00:46:40 +0000 (0:00:02.041) 0:00:12.745 **********
2026-04-13 00:51:14.600699 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:51:14.600710 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:51:14.600722 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:51:14.600733 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:51:14.600744 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:51:14.600755 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:51:14.600766 | orchestrator |
2026-04-13 00:51:14.600778 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-13 00:51:14.600790 | orchestrator | Monday 13 April 2026 00:46:41 +0000 (0:00:00.767) 0:00:13.512 **********
2026-04-13 00:51:14.600801 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:51:14.600812 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:51:14.600823 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:51:14.600835 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:51:14.600846 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:51:14.600860 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": false, "dest": "/usr/local/bin/k3s", "elapsed": 5, "msg": "Connection failure: Remote end closed connection without response", "url": "https://github.com/k3s-io/k3s/releases/download/v1.34.1+k3s1/sha256sum-amd64.txt"}
2026-04-13 00:51:14.600883 | orchestrator |
2026-04-13 00:51:14.600895 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-13 00:51:14.600906 | orchestrator | Monday 13 April 2026 00:46:48 +0000 (0:00:07.517) 0:00:21.030 **********
2026-04-13 00:51:14.600918 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:14.600929 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:14.600940 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:14.600951 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:14.600963 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:14.600974 | orchestrator |
2026-04-13 00:51:14.600985 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-13 00:51:14.600997 | orchestrator | Monday 13 April 2026 00:46:49 +0000 (0:00:01.250) 0:00:22.280 **********
2026-04-13 00:51:14.601008 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:14.601019 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:14.601030 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:14.601041 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:14.601053 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:14.601064 | orchestrator |
2026-04-13 00:51:14.601076 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-13 00:51:14.601089 | orchestrator | Monday 13 April 2026 00:46:52 +0000 (0:00:02.703) 0:00:24.984 **********
2026-04-13 00:51:14.601099 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:14.601109 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:14.601119 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:14.601129 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:14.601139 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:14.601149 | orchestrator |
2026-04-13 00:51:14.601159 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-13 00:51:14.601169 | orchestrator | Monday 13 April 2026 00:46:53 +0000 (0:00:00.571) 0:00:25.556 **********
2026-04-13 00:51:14.601179 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-13 00:51:14.601189 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-13 00:51:14.601199 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:14.601209 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-13 00:51:14.601219 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-13 00:51:14.601229 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:14.601239 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-13 00:51:14.601249 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-13 00:51:14.601265 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:14.601275 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-13 00:51:14.601285 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-13 00:51:14.601295 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:14.601305 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-13 00:51:14.601315 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-13 00:51:14.601325 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:14.601335 | orchestrator |
2026-04-13 00:51:14.601345 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-13 00:51:14.601363 | orchestrator | Monday 13 April 2026 00:46:54 +0000 (0:00:00.986) 0:00:26.542 **********
2026-04-13 00:51:14.601373 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:14.601384 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:14.601394 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:14.601404 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:14.601420 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:14.601437 | orchestrator |
2026-04-13 00:51:14.601454 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-13 00:51:14.601471 | orchestrator | Monday 13 April 2026 00:46:54 +0000 (0:00:00.756) 0:00:27.298 **********
2026-04-13 00:51:14.601487 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:14.601503 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:14.601540 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:14.601558 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:14.601575 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:14.601590 | orchestrator |
2026-04-13 00:51:14.601605 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-13 00:51:14.601620 | orchestrator |
2026-04-13 00:51:14.601637 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-13 00:51:14.601654 | orchestrator | Monday 13 April 2026 00:46:56 +0000 (0:00:01.546) 0:00:28.845 **********
2026-04-13 00:51:14.601671 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:51:14.601687 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:51:14.601701 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:51:14.601711 | orchestrator |
2026-04-13 00:51:14.601721 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-13 00:51:14.601731 | orchestrator | Monday 13 April 2026 00:46:57 +0000 (0:00:01.188) 0:00:30.034 **********
2026-04-13 00:51:14.601741 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:51:14.601751 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:51:14.601760 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:51:14.601770 | orchestrator |
2026-04-13 00:51:14.601780 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-13 00:51:14.601790 | orchestrator | Monday 13 April 2026 00:46:59 +0000 (0:00:01.666) 0:00:31.700 **********
2026-04-13 00:51:14.601800 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:51:14.601810 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:51:14.601820 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:51:14.601829 | orchestrator |
2026-04-13 00:51:14.601839 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-13 00:51:14.601849 | orchestrator | Monday 13 April 2026 00:47:00 +0000 (0:00:01.030) 0:00:32.731 **********
2026-04-13 00:51:14.601859 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:51:14.601869 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:51:14.601879 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:51:14.601888 | orchestrator |
2026-04-13 00:51:14.601898 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-13 00:51:14.601908 | orchestrator | Monday 13 April 2026 00:47:01 +0000 (0:00:00.863) 0:00:33.594 **********
2026-04-13 00:51:14.601918 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:14.601928 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:14.601938 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:14.601949 | orchestrator |
2026-04-13 00:51:14.601965 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-13 00:51:14.601991 | orchestrator | Monday 13 April 2026 00:47:01 +0000 (0:00:00.603) 0:00:34.197 **********
2026-04-13 00:51:14.602007 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:51:14.602092 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:51:14.602110 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:51:14.602127 | orchestrator |
2026-04-13 00:51:14.602140 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-13 00:51:14.602150 | orchestrator | Monday 13 April 2026 00:47:03 +0000 (0:00:02.149) 0:00:36.347 **********
2026-04-13 00:51:14.602160 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:51:14.602170 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:51:14.602179 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:51:14.602189 | orchestrator |
2026-04-13 00:51:14.602199 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-13 00:51:14.602209 | orchestrator | Monday 13 April 2026 00:47:06 +0000 (0:00:03.040) 0:00:39.387 **********
2026-04-13 00:51:14.602229 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:51:14.602239 | orchestrator |
2026-04-13 00:51:14.602249 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-13 00:51:14.602259 | orchestrator | Monday 13 April 2026 00:47:07 +0000 (0:00:00.812) 0:00:40.200 **********
2026-04-13 00:51:14.602269 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:51:14.602279 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:51:14.602289 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:51:14.602299 | orchestrator |
2026-04-13 00:51:14.602309 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-13 00:51:14.602319 | orchestrator | Monday 13 April 2026 00:47:12 +0000 (0:00:04.355) 0:00:44.555 **********
2026-04-13 00:51:14.602329 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:14.602339 |
orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:14.602348 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:14.602358 | orchestrator | 2026-04-13 00:51:14.602368 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-13 00:51:14.602378 | orchestrator | Monday 13 April 2026 00:47:13 +0000 (0:00:01.232) 0:00:45.787 ********** 2026-04-13 00:51:14.602388 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:14.602398 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:14.602414 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:14.602425 | orchestrator | 2026-04-13 00:51:14.602434 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-13 00:51:14.602445 | orchestrator | Monday 13 April 2026 00:47:14 +0000 (0:00:01.009) 0:00:46.796 ********** 2026-04-13 00:51:14.602454 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:14.602464 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:14.602474 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:14.602484 | orchestrator | 2026-04-13 00:51:14.602494 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-13 00:51:14.602504 | orchestrator | Monday 13 April 2026 00:47:16 +0000 (0:00:02.185) 0:00:48.982 ********** 2026-04-13 00:51:14.602538 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:14.602559 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:14.602569 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:14.602579 | orchestrator | 2026-04-13 00:51:14.602590 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-04-13 00:51:14.602600 | orchestrator | Monday 13 April 2026 00:47:17 +0000 (0:00:00.652) 0:00:49.634 ********** 2026-04-13 00:51:14.602610 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:14.602620 | 
orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:14.602630 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:14.602640 | orchestrator | 2026-04-13 00:51:14.602650 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-13 00:51:14.602660 | orchestrator | Monday 13 April 2026 00:47:18 +0000 (0:00:01.143) 0:00:50.778 ********** 2026-04-13 00:51:14.602670 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:14.602680 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:51:14.602690 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:51:14.602700 | orchestrator | 2026-04-13 00:51:14.602710 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-13 00:51:14.602721 | orchestrator | Monday 13 April 2026 00:47:21 +0000 (0:00:03.029) 0:00:53.808 ********** 2026-04-13 00:51:14.602731 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:14.602741 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:51:14.602751 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:51:14.602761 | orchestrator | 2026-04-13 00:51:14.602771 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-13 00:51:14.602782 | orchestrator | Monday 13 April 2026 00:47:24 +0000 (0:00:03.071) 0:00:56.879 ********** 2026-04-13 00:51:14.602792 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:51:14.602802 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:51:14.602823 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:14.602840 | orchestrator | 2026-04-13 00:51:14.602856 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-13 00:51:14.602872 | orchestrator | Monday 13 April 2026 00:47:25 +0000 (0:00:00.642) 0:00:57.521 ********** 2026-04-13 00:51:14.602889 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined 
(check k3s-init.service if this fails) (20 retries left). 2026-04-13 00:51:14.602904 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-13 00:51:14.602920 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-13 00:51:14.602935 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-13 00:51:14.602952 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-13 00:51:14.602969 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-13 00:51:14.602986 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-13 00:51:14.603004 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-13 00:51:14.603021 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-13 00:51:14.603037 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-13 00:51:14.603054 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-04-13 00:51:14.603065 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-13 00:51:14.603075 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:51:14.603085 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:51:14.603095 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:14.603105 | orchestrator | 2026-04-13 00:51:14.603115 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-13 00:51:14.603125 | orchestrator | Monday 13 April 2026 00:48:08 +0000 (0:00:43.880) 0:01:41.401 ********** 2026-04-13 00:51:14.603135 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:14.603145 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:14.603155 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:14.603165 | orchestrator | 2026-04-13 00:51:14.603175 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-13 00:51:14.603210 | orchestrator | Monday 13 April 2026 00:48:09 +0000 (0:00:00.320) 0:01:41.722 ********** 2026-04-13 00:51:14.603241 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:14.603253 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:51:14.603263 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:51:14.603273 | orchestrator | 2026-04-13 00:51:14.603283 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-13 00:51:14.603293 | orchestrator | Monday 13 April 2026 00:48:10 +0000 (0:00:00.990) 0:01:42.712 ********** 2026-04-13 00:51:14.603303 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:14.603313 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:51:14.603323 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:51:14.603332 | orchestrator | 2026-04-13 00:51:14.603342 | orchestrator | TASK [k3s_server : Enable and check K3s service] 
******************************* 2026-04-13 00:51:14.603369 | orchestrator | Monday 13 April 2026 00:48:11 +0000 (0:00:01.689) 0:01:44.401 ********** 2026-04-13 00:51:14.603380 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:51:14.603390 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:51:14.603400 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:14.603410 | orchestrator | 2026-04-13 00:51:14.603420 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-13 00:51:14.603432 | orchestrator | Monday 13 April 2026 00:48:38 +0000 (0:00:26.188) 0:02:10.590 ********** 2026-04-13 00:51:14.603450 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:51:14.603467 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:51:14.603485 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:14.603502 | orchestrator | 2026-04-13 00:51:14.603585 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-13 00:51:14.603604 | orchestrator | Monday 13 April 2026 00:48:38 +0000 (0:00:00.772) 0:02:11.362 ********** 2026-04-13 00:51:14.603623 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:51:14.603658 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:51:14.603696 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:14.603711 | orchestrator | 2026-04-13 00:51:14.603725 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-13 00:51:14.603738 | orchestrator | Monday 13 April 2026 00:48:39 +0000 (0:00:00.860) 0:02:12.222 ********** 2026-04-13 00:51:14.603752 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:14.603767 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:51:14.603780 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:51:14.603793 | orchestrator | 2026-04-13 00:51:14.603805 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-13 
00:51:14.603825 | orchestrator | Monday 13 April 2026 00:48:40 +0000 (0:00:00.718) 0:02:12.941 ********** 2026-04-13 00:51:14.603834 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:51:14.603854 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:51:14.603872 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:14.603890 | orchestrator | 2026-04-13 00:51:14.603899 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-13 00:51:14.603907 | orchestrator | Monday 13 April 2026 00:48:41 +0000 (0:00:01.021) 0:02:13.962 ********** 2026-04-13 00:51:14.603927 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:51:14.603946 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:51:14.603965 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:14.603984 | orchestrator | 2026-04-13 00:51:14.603992 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-13 00:51:14.604022 | orchestrator | Monday 13 April 2026 00:48:42 +0000 (0:00:00.777) 0:02:14.740 ********** 2026-04-13 00:51:14.604041 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:14.604050 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:51:14.604058 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:51:14.604066 | orchestrator | 2026-04-13 00:51:14.604075 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-13 00:51:14.604083 | orchestrator | Monday 13 April 2026 00:48:42 +0000 (0:00:00.723) 0:02:15.463 ********** 2026-04-13 00:51:14.604091 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:14.604100 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:51:14.604118 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:51:14.604136 | orchestrator | 2026-04-13 00:51:14.604145 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-13 00:51:14.604153 | orchestrator | Monday 13 
April 2026 00:48:43 +0000 (0:00:00.766) 0:02:16.229 ********** 2026-04-13 00:51:14.604161 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:14.604169 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:51:14.604177 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:51:14.604185 | orchestrator | 2026-04-13 00:51:14.604194 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-13 00:51:14.604202 | orchestrator | Monday 13 April 2026 00:48:44 +0000 (0:00:01.052) 0:02:17.281 ********** 2026-04-13 00:51:14.604218 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:14.604226 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:51:14.604234 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:51:14.604242 | orchestrator | 2026-04-13 00:51:14.604250 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-13 00:51:14.604259 | orchestrator | Monday 13 April 2026 00:48:46 +0000 (0:00:01.225) 0:02:18.507 ********** 2026-04-13 00:51:14.604267 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:14.604275 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:14.604283 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:14.604291 | orchestrator | 2026-04-13 00:51:14.604300 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-13 00:51:14.604308 | orchestrator | Monday 13 April 2026 00:48:46 +0000 (0:00:00.369) 0:02:18.876 ********** 2026-04-13 00:51:14.604316 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:14.604324 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:14.604332 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:14.604340 | orchestrator | 2026-04-13 00:51:14.604348 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-13 00:51:14.604357 | orchestrator | Monday 13 April 
2026 00:48:46 +0000 (0:00:00.379) 0:02:19.256 ********** 2026-04-13 00:51:14.604365 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:14.604373 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:51:14.604381 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:51:14.604389 | orchestrator | 2026-04-13 00:51:14.604398 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-13 00:51:14.604406 | orchestrator | Monday 13 April 2026 00:48:47 +0000 (0:00:00.941) 0:02:20.198 ********** 2026-04-13 00:51:14.604414 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:51:14.604422 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:51:14.604440 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:14.604455 | orchestrator | 2026-04-13 00:51:14.604467 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-13 00:51:14.604480 | orchestrator | Monday 13 April 2026 00:48:49 +0000 (0:00:01.320) 0:02:21.518 ********** 2026-04-13 00:51:14.604493 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-13 00:51:14.604506 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-13 00:51:14.604555 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-13 00:51:14.604571 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-13 00:51:14.604586 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-13 00:51:14.604595 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-13 00:51:14.604604 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-13 00:51:14.604613 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-13 00:51:14.604628 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-13 00:51:14.604640 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-13 00:51:14.604653 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-13 00:51:14.604664 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-13 00:51:14.604678 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-13 00:51:14.604690 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-13 00:51:14.604703 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-13 00:51:14.604728 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-13 00:51:14.604738 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-13 00:51:14.604746 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-13 00:51:14.604754 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-13 00:51:14.604763 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-13 00:51:14.604771 | orchestrator | 2026-04-13 00:51:14.604779 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-13 00:51:14.604787 | orchestrator | 2026-04-13 00:51:14.604795 | 
orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-13 00:51:14.604803 | orchestrator | Monday 13 April 2026 00:48:52 +0000 (0:00:03.070) 0:02:24.589 ********** 2026-04-13 00:51:14.604811 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:51:14.604819 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:51:14.604827 | orchestrator | 2026-04-13 00:51:14.604835 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-13 00:51:14.604843 | orchestrator | Monday 13 April 2026 00:48:52 +0000 (0:00:00.257) 0:02:24.847 ********** 2026-04-13 00:51:14.604851 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:51:14.604859 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:51:14.604867 | orchestrator | 2026-04-13 00:51:14.604876 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-13 00:51:14.604884 | orchestrator | Monday 13 April 2026 00:48:52 +0000 (0:00:00.488) 0:02:25.335 ********** 2026-04-13 00:51:14.604896 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:51:14.604910 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:51:14.604924 | orchestrator | 2026-04-13 00:51:14.604938 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-13 00:51:14.604953 | orchestrator | Monday 13 April 2026 00:48:53 +0000 (0:00:00.407) 0:02:25.743 ********** 2026-04-13 00:51:14.604966 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-5 2026-04-13 00:51:14.604979 | orchestrator | 2026-04-13 00:51:14.604991 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-13 00:51:14.605000 | orchestrator | Monday 13 April 2026 00:48:53 +0000 (0:00:00.328) 0:02:26.071 ********** 2026-04-13 00:51:14.605014 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:51:14.605028 | 
orchestrator | skipping: [testbed-node-5] 2026-04-13 00:51:14.605040 | orchestrator | 2026-04-13 00:51:14.605054 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-13 00:51:14.605068 | orchestrator | Monday 13 April 2026 00:48:53 +0000 (0:00:00.285) 0:02:26.357 ********** 2026-04-13 00:51:14.605083 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:51:14.605096 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:51:14.605108 | orchestrator | 2026-04-13 00:51:14.605117 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-13 00:51:14.605125 | orchestrator | Monday 13 April 2026 00:48:54 +0000 (0:00:00.253) 0:02:26.610 ********** 2026-04-13 00:51:14.605133 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:51:14.605141 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:51:14.605149 | orchestrator | 2026-04-13 00:51:14.605157 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-13 00:51:14.605176 | orchestrator | Monday 13 April 2026 00:48:54 +0000 (0:00:00.227) 0:02:26.838 ********** 2026-04-13 00:51:14.605184 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:51:14.605192 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:51:14.605200 | orchestrator | 2026-04-13 00:51:14.605209 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-13 00:51:14.605217 | orchestrator | Monday 13 April 2026 00:48:55 +0000 (0:00:00.843) 0:02:27.682 ********** 2026-04-13 00:51:14.605234 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:51:14.605242 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:51:14.605250 | orchestrator | 2026-04-13 00:51:14.605258 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-13 00:51:14.605274 | orchestrator | Monday 13 April 2026 00:48:56 +0000 
(0:00:01.186) 0:02:28.868 ********** 2026-04-13 00:51:14.605282 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:51:14.605290 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:51:14.605298 | orchestrator | 2026-04-13 00:51:14.605306 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-13 00:51:14.605315 | orchestrator | Monday 13 April 2026 00:48:57 +0000 (0:00:01.311) 0:02:30.180 ********** 2026-04-13 00:51:14.605323 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:51:14.605331 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:51:14.605339 | orchestrator | 2026-04-13 00:51:14.605347 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-13 00:51:14.605355 | orchestrator | 2026-04-13 00:51:14.605363 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-13 00:51:14.605371 | orchestrator | Monday 13 April 2026 00:49:07 +0000 (0:00:10.281) 0:02:40.462 ********** 2026-04-13 00:51:14.605379 | orchestrator | ok: [testbed-manager] 2026-04-13 00:51:14.605388 | orchestrator | 2026-04-13 00:51:14.605396 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-13 00:51:14.605404 | orchestrator | Monday 13 April 2026 00:49:08 +0000 (0:00:00.757) 0:02:41.219 ********** 2026-04-13 00:51:14.605412 | orchestrator | changed: [testbed-manager] 2026-04-13 00:51:14.605420 | orchestrator | 2026-04-13 00:51:14.605428 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-13 00:51:14.605437 | orchestrator | Monday 13 April 2026 00:49:09 +0000 (0:00:00.548) 0:02:41.768 ********** 2026-04-13 00:51:14.605445 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-13 00:51:14.605453 | orchestrator | 2026-04-13 00:51:14.605461 | orchestrator | TASK [Write kubeconfig file] 
*************************************************** 2026-04-13 00:51:14.605469 | orchestrator | Monday 13 April 2026 00:49:09 +0000 (0:00:00.497) 0:02:42.265 ********** 2026-04-13 00:51:14.605477 | orchestrator | changed: [testbed-manager] 2026-04-13 00:51:14.605486 | orchestrator | 2026-04-13 00:51:14.605494 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-13 00:51:14.605502 | orchestrator | Monday 13 April 2026 00:49:10 +0000 (0:00:00.849) 0:02:43.115 ********** 2026-04-13 00:51:14.605531 | orchestrator | changed: [testbed-manager] 2026-04-13 00:51:14.605542 | orchestrator | 2026-04-13 00:51:14.605551 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-13 00:51:14.605559 | orchestrator | Monday 13 April 2026 00:49:11 +0000 (0:00:00.660) 0:02:43.776 ********** 2026-04-13 00:51:14.605567 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-13 00:51:14.605575 | orchestrator | 2026-04-13 00:51:14.605583 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-13 00:51:14.605591 | orchestrator | Monday 13 April 2026 00:49:13 +0000 (0:00:01.792) 0:02:45.568 ********** 2026-04-13 00:51:14.605600 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-13 00:51:14.605608 | orchestrator | 2026-04-13 00:51:14.605616 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-13 00:51:14.605624 | orchestrator | Monday 13 April 2026 00:49:14 +0000 (0:00:01.032) 0:02:46.601 ********** 2026-04-13 00:51:14.605632 | orchestrator | changed: [testbed-manager] 2026-04-13 00:51:14.605641 | orchestrator | 2026-04-13 00:51:14.605649 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-13 00:51:14.605657 | orchestrator | Monday 13 April 2026 00:49:14 +0000 (0:00:00.446) 0:02:47.048 ********** 2026-04-13 
00:51:14.605665 | orchestrator | changed: [testbed-manager] 2026-04-13 00:51:14.605673 | orchestrator | 2026-04-13 00:51:14.605681 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-04-13 00:51:14.605696 | orchestrator | 2026-04-13 00:51:14.605704 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-04-13 00:51:14.605712 | orchestrator | Monday 13 April 2026 00:49:15 +0000 (0:00:00.491) 0:02:47.539 ********** 2026-04-13 00:51:14.605720 | orchestrator | ok: [testbed-manager] 2026-04-13 00:51:14.605728 | orchestrator | 2026-04-13 00:51:14.605737 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-04-13 00:51:14.605745 | orchestrator | Monday 13 April 2026 00:49:15 +0000 (0:00:00.226) 0:02:47.766 ********** 2026-04-13 00:51:14.605753 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-04-13 00:51:14.605761 | orchestrator | 2026-04-13 00:51:14.605770 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-04-13 00:51:14.605778 | orchestrator | Monday 13 April 2026 00:49:15 +0000 (0:00:00.522) 0:02:48.288 ********** 2026-04-13 00:51:14.605786 | orchestrator | ok: [testbed-manager] 2026-04-13 00:51:14.605794 | orchestrator | 2026-04-13 00:51:14.605802 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-04-13 00:51:14.605810 | orchestrator | Monday 13 April 2026 00:49:16 +0000 (0:00:00.826) 0:02:49.115 ********** 2026-04-13 00:51:14.605818 | orchestrator | ok: [testbed-manager] 2026-04-13 00:51:14.605827 | orchestrator | 2026-04-13 00:51:14.605835 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-04-13 00:51:14.605843 | orchestrator | Monday 13 April 2026 00:49:18 +0000 (0:00:01.766) 0:02:50.882 ********** 2026-04-13 
00:51:14.605851 | orchestrator | changed: [testbed-manager] 2026-04-13 00:51:14.605860 | orchestrator | 2026-04-13 00:51:14.605868 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-04-13 00:51:14.605876 | orchestrator | Monday 13 April 2026 00:49:19 +0000 (0:00:00.848) 0:02:51.730 ********** 2026-04-13 00:51:14.605888 | orchestrator | ok: [testbed-manager] 2026-04-13 00:51:14.605897 | orchestrator | 2026-04-13 00:51:14.605906 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-04-13 00:51:14.605914 | orchestrator | Monday 13 April 2026 00:49:19 +0000 (0:00:00.514) 0:02:52.245 ********** 2026-04-13 00:51:14.605922 | orchestrator | changed: [testbed-manager] 2026-04-13 00:51:14.605930 | orchestrator | 2026-04-13 00:51:14.605938 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-04-13 00:51:14.605946 | orchestrator | Monday 13 April 2026 00:49:29 +0000 (0:00:09.822) 0:03:02.067 ********** 2026-04-13 00:51:14.605955 | orchestrator | changed: [testbed-manager] 2026-04-13 00:51:14.605963 | orchestrator | 2026-04-13 00:51:14.605971 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-04-13 00:51:14.605985 | orchestrator | Monday 13 April 2026 00:49:45 +0000 (0:00:15.992) 0:03:18.059 ********** 2026-04-13 00:51:14.605994 | orchestrator | ok: [testbed-manager] 2026-04-13 00:51:14.606002 | orchestrator | 2026-04-13 00:51:14.606011 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-04-13 00:51:14.606050 | orchestrator | 2026-04-13 00:51:14.606059 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-04-13 00:51:14.606068 | orchestrator | Monday 13 April 2026 00:49:46 +0000 (0:00:00.947) 0:03:19.007 ********** 2026-04-13 00:51:14.606076 | orchestrator | ok: 
[testbed-node-0] 2026-04-13 00:51:14.606084 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:51:14.606092 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:14.606101 | orchestrator | 2026-04-13 00:51:14.606109 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-04-13 00:51:14.606117 | orchestrator | Monday 13 April 2026 00:49:46 +0000 (0:00:00.320) 0:03:19.327 ********** 2026-04-13 00:51:14.606125 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:14.606133 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:14.606141 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:14.606149 | orchestrator | 2026-04-13 00:51:14.606158 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-04-13 00:51:14.606174 | orchestrator | Monday 13 April 2026 00:49:47 +0000 (0:00:00.409) 0:03:19.736 ********** 2026-04-13 00:51:14.606182 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:51:14.606191 | orchestrator | 2026-04-13 00:51:14.606200 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-04-13 00:51:14.606208 | orchestrator | Monday 13 April 2026 00:49:48 +0000 (0:00:01.243) 0:03:20.980 ********** 2026-04-13 00:51:14.606216 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-13 00:51:14.606224 | orchestrator | 2026-04-13 00:51:14.606232 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-04-13 00:51:14.606241 | orchestrator | Monday 13 April 2026 00:49:49 +0000 (0:00:01.135) 0:03:22.116 ********** 2026-04-13 00:51:14.606250 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-13 00:51:14.606258 | orchestrator | 2026-04-13 00:51:14.606266 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-04-13 
00:51:14.606275 | orchestrator | Monday 13 April 2026 00:49:50 +0000 (0:00:01.189) 0:03:23.305 ********** 2026-04-13 00:51:14.606283 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:14.606291 | orchestrator | 2026-04-13 00:51:14.606300 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-04-13 00:51:14.606308 | orchestrator | Monday 13 April 2026 00:49:50 +0000 (0:00:00.082) 0:03:23.387 ********** 2026-04-13 00:51:14.606316 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-13 00:51:14.606324 | orchestrator | 2026-04-13 00:51:14.606333 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-04-13 00:51:14.606341 | orchestrator | Monday 13 April 2026 00:49:51 +0000 (0:00:00.887) 0:03:24.274 ********** 2026-04-13 00:51:14.606350 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:14.606358 | orchestrator | 2026-04-13 00:51:14.606366 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-04-13 00:51:14.606374 | orchestrator | Monday 13 April 2026 00:49:51 +0000 (0:00:00.137) 0:03:24.412 ********** 2026-04-13 00:51:14.606383 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:14.606391 | orchestrator | 2026-04-13 00:51:14.606399 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-04-13 00:51:14.606407 | orchestrator | Monday 13 April 2026 00:49:52 +0000 (0:00:00.103) 0:03:24.516 ********** 2026-04-13 00:51:14.606416 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:14.606424 | orchestrator | 2026-04-13 00:51:14.606433 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-13 00:51:14.606441 | orchestrator | Monday 13 April 2026 00:49:52 +0000 (0:00:00.104) 0:03:24.620 ********** 2026-04-13 00:51:14.606450 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:14.606458 | 
orchestrator | 2026-04-13 00:51:14.606466 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-13 00:51:14.606474 | orchestrator | Monday 13 April 2026 00:49:52 +0000 (0:00:00.127) 0:03:24.748 ********** 2026-04-13 00:51:14.606482 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-13 00:51:14.606491 | orchestrator | 2026-04-13 00:51:14.606499 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-13 00:51:14.606507 | orchestrator | Monday 13 April 2026 00:49:57 +0000 (0:00:05.601) 0:03:30.349 ********** 2026-04-13 00:51:14.606569 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-04-13 00:51:14.606578 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2026-04-13 00:51:14.606587 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-13 00:51:14.606595 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-13 00:51:14.606603 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-13 00:51:14.606611 | orchestrator | 2026-04-13 00:51:14.606620 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-13 00:51:14.606640 | orchestrator | Monday 13 April 2026 00:50:41 +0000 (0:00:43.368) 0:04:13.719 ********** 2026-04-13 00:51:14.606649 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-13 00:51:14.606657 | orchestrator | 2026-04-13 00:51:14.606666 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-13 00:51:14.606674 | orchestrator | Monday 13 April 2026 00:50:42 +0000 (0:00:01.498) 0:04:15.218 ********** 2026-04-13 00:51:14.606682 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-13 00:51:14.606690 | orchestrator | 2026-04-13 
00:51:14.606699 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-13 00:51:14.606707 | orchestrator | Monday 13 April 2026 00:50:45 +0000 (0:00:02.284) 0:04:17.502 ********** 2026-04-13 00:51:14.606715 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-13 00:51:14.606724 | orchestrator | 2026-04-13 00:51:14.606740 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-13 00:51:14.606749 | orchestrator | Monday 13 April 2026 00:50:46 +0000 (0:00:01.209) 0:04:18.711 ********** 2026-04-13 00:51:14.606757 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:14.606765 | orchestrator | 2026-04-13 00:51:14.606774 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-13 00:51:14.606782 | orchestrator | Monday 13 April 2026 00:50:46 +0000 (0:00:00.144) 0:04:18.856 ********** 2026-04-13 00:51:14.606790 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-13 00:51:14.606798 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-13 00:51:14.606807 | orchestrator | 2026-04-13 00:51:14.606815 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-13 00:51:14.606823 | orchestrator | Monday 13 April 2026 00:50:48 +0000 (0:00:02.292) 0:04:21.149 ********** 2026-04-13 00:51:14.606831 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:14.606839 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:14.606848 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:14.606856 | orchestrator | 2026-04-13 00:51:14.606864 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-13 00:51:14.606873 | orchestrator | Monday 13 April 2026 00:50:48 +0000 (0:00:00.292) 
0:04:21.441 ********** 2026-04-13 00:51:14.606881 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:51:14.606889 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:51:14.606897 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:14.606906 | orchestrator | 2026-04-13 00:51:14.606914 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-13 00:51:14.606922 | orchestrator | 2026-04-13 00:51:14.606930 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-13 00:51:14.606938 | orchestrator | Monday 13 April 2026 00:50:50 +0000 (0:00:01.124) 0:04:22.565 ********** 2026-04-13 00:51:14.606947 | orchestrator | ok: [testbed-manager] 2026-04-13 00:51:14.606955 | orchestrator | 2026-04-13 00:51:14.606963 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-04-13 00:51:14.606971 | orchestrator | Monday 13 April 2026 00:50:50 +0000 (0:00:00.277) 0:04:22.842 ********** 2026-04-13 00:51:14.606979 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-04-13 00:51:14.606988 | orchestrator | 2026-04-13 00:51:14.606996 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-04-13 00:51:14.607004 | orchestrator | Monday 13 April 2026 00:50:50 +0000 (0:00:00.221) 0:04:23.064 ********** 2026-04-13 00:51:14.607013 | orchestrator | changed: [testbed-manager] 2026-04-13 00:51:14.607021 | orchestrator | 2026-04-13 00:51:14.607029 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-13 00:51:14.607038 | orchestrator | 2026-04-13 00:51:14.607047 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-13 00:51:14.607055 | orchestrator | Monday 13 April 2026 00:50:56 +0000 (0:00:05.849) 0:04:28.913 ********** 2026-04-13 00:51:14.607069 | 
orchestrator | ok: [testbed-node-3] 2026-04-13 00:51:14.607077 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:51:14.607086 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:51:14.607094 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:51:14.607100 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:14.607108 | orchestrator | 2026-04-13 00:51:14.607115 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-13 00:51:14.607122 | orchestrator | Monday 13 April 2026 00:50:57 +0000 (0:00:00.930) 0:04:29.844 ********** 2026-04-13 00:51:14.607129 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-13 00:51:14.607136 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-13 00:51:14.607142 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-13 00:51:14.607149 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-13 00:51:14.607156 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-13 00:51:14.607163 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-13 00:51:14.607170 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-13 00:51:14.607177 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-13 00:51:14.607184 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-13 00:51:14.607192 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-13 00:51:14.607204 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-13 00:51:14.607216 | 
orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-13 00:51:14.607229 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-13 00:51:14.607245 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-13 00:51:14.607256 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-13 00:51:14.607268 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-13 00:51:14.607281 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-13 00:51:14.607294 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-13 00:51:14.607314 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-13 00:51:14.607326 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-13 00:51:14.607333 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-13 00:51:14.607340 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-13 00:51:14.607347 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-13 00:51:14.607354 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-13 00:51:14.607361 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-13 00:51:14.607368 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-13 00:51:14.607375 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-13 00:51:14.607382 | orchestrator | 
2026-04-13 00:51:14.607389 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-13 00:51:14.607396 | orchestrator | Monday 13 April 2026 00:51:11 +0000 (0:00:13.818) 0:04:43.662 ********** 2026-04-13 00:51:14.607403 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:51:14.607416 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:51:14.607423 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:14.607445 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:14.607452 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:14.607459 | orchestrator | 2026-04-13 00:51:14.607466 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-13 00:51:14.607473 | orchestrator | Monday 13 April 2026 00:51:11 +0000 (0:00:00.567) 0:04:44.229 ********** 2026-04-13 00:51:14.607480 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:51:14.607488 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:51:14.607495 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:14.607502 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:14.607509 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:14.607538 | orchestrator | 2026-04-13 00:51:14.607546 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:51:14.607553 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:51:14.607562 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-13 00:51:14.607569 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-13 00:51:14.607577 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-13 00:51:14.607584 | orchestrator | 
testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-13 00:51:14.607591 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-13 00:51:14.607600 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-13 00:51:14.607607 | orchestrator | 2026-04-13 00:51:14.607614 | orchestrator | 2026-04-13 00:51:14.607621 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:51:14.607628 | orchestrator | Monday 13 April 2026 00:51:12 +0000 (0:00:00.679) 0:04:44.909 ********** 2026-04-13 00:51:14.607635 | orchestrator | =============================================================================== 2026-04-13 00:51:14.607642 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.88s 2026-04-13 00:51:14.607649 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 43.37s 2026-04-13 00:51:14.607656 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.19s 2026-04-13 00:51:14.607663 | orchestrator | kubectl : Install required packages ------------------------------------ 15.99s 2026-04-13 00:51:14.607670 | orchestrator | Manage labels ---------------------------------------------------------- 13.82s 2026-04-13 00:51:14.607677 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.28s 2026-04-13 00:51:14.607684 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.82s 2026-04-13 00:51:14.607691 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 7.52s 2026-04-13 00:51:14.607698 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.85s 2026-04-13 00:51:14.607706 | orchestrator | 
k3s_server_post : Install Cilium ---------------------------------------- 5.60s 2026-04-13 00:51:14.607713 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 4.36s 2026-04-13 00:51:14.607720 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.07s 2026-04-13 00:51:14.607733 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.07s 2026-04-13 00:51:14.607745 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 3.04s 2026-04-13 00:51:14.607752 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 3.03s 2026-04-13 00:51:14.607760 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.85s 2026-04-13 00:51:14.607767 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.70s 2026-04-13 00:51:14.607774 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.29s 2026-04-13 00:51:14.607782 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.28s 2026-04-13 00:51:14.607788 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.19s 2026-04-13 00:51:14.607796 | orchestrator | 2026-04-13 00:51:14 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:51:14.607803 | orchestrator | 2026-04-13 00:51:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:51:17.651028 | orchestrator | 2026-04-13 00:51:17 | INFO  | Task e759b9f1-1696-4428-a41d-affdd9f96066 is in state STARTED 2026-04-13 00:51:17.651158 | orchestrator | 2026-04-13 00:51:17 | INFO  | Task d1713e15-b410-474f-9642-eb95edee2be0 is in state STARTED 2026-04-13 00:51:17.651722 | orchestrator | 2026-04-13 00:51:17 | INFO  | Task 
c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:51:17.653261 | orchestrator | 2026-04-13 00:51:17 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:51:17.658789 | orchestrator | 2026-04-13 00:51:17 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED 2026-04-13 00:51:17.661349 | orchestrator | 2026-04-13 00:51:17 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:51:17.661934 | orchestrator | 2026-04-13 00:51:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:51:20.842971 | orchestrator | 2026-04-13 00:51:20 | INFO  | Task e759b9f1-1696-4428-a41d-affdd9f96066 is in state STARTED 2026-04-13 00:51:20.843678 | orchestrator | 2026-04-13 00:51:20 | INFO  | Task d1713e15-b410-474f-9642-eb95edee2be0 is in state STARTED 2026-04-13 00:51:20.844628 | orchestrator | 2026-04-13 00:51:20 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:51:20.845840 | orchestrator | 2026-04-13 00:51:20 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:51:20.847843 | orchestrator | 2026-04-13 00:51:20 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED 2026-04-13 00:51:20.850318 | orchestrator | 2026-04-13 00:51:20 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:51:20.850372 | orchestrator | 2026-04-13 00:51:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:51:23.942737 | orchestrator | 2026-04-13 00:51:23 | INFO  | Task e759b9f1-1696-4428-a41d-affdd9f96066 is in state SUCCESS 2026-04-13 00:51:23.943235 | orchestrator | 2026-04-13 00:51:23 | INFO  | Task d1713e15-b410-474f-9642-eb95edee2be0 is in state STARTED 2026-04-13 00:51:23.944439 | orchestrator | 2026-04-13 00:51:23 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:51:23.948187 | orchestrator | 2026-04-13 00:51:23 | INFO  | Task 
78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:51:23.950779 | orchestrator | 2026-04-13 00:51:23 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED 2026-04-13 00:51:23.952594 | orchestrator | 2026-04-13 00:51:23 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:51:23.953103 | orchestrator | 2026-04-13 00:51:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:51:26.989998 | orchestrator | 2026-04-13 00:51:26 | INFO  | Task d1713e15-b410-474f-9642-eb95edee2be0 is in state SUCCESS 2026-04-13 00:51:26.992709 | orchestrator | 2026-04-13 00:51:26 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:51:26.993415 | orchestrator | 2026-04-13 00:51:26 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:51:26.994458 | orchestrator | 2026-04-13 00:51:26 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED 2026-04-13 00:51:26.995078 | orchestrator | 2026-04-13 00:51:26 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:51:26.995104 | orchestrator | 2026-04-13 00:51:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:51:30.035804 | orchestrator | 2026-04-13 00:51:30 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:51:30.041123 | orchestrator | 2026-04-13 00:51:30 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:51:30.042781 | orchestrator | 2026-04-13 00:51:30 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED 2026-04-13 00:51:30.045339 | orchestrator | 2026-04-13 00:51:30 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:51:30.045436 | orchestrator | 2026-04-13 00:51:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:51:33.098005 | orchestrator | 2026-04-13 00:51:33 | INFO  | Task 
c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:51:33.100232 | orchestrator | 2026-04-13 00:51:33 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state STARTED 2026-04-13 00:51:33.101580 | orchestrator | 2026-04-13 00:51:33 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED 2026-04-13 00:51:33.103718 | orchestrator | 2026-04-13 00:51:33 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED 2026-04-13 00:51:33.103752 | orchestrator | 2026-04-13 00:51:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:52:09.672469 | orchestrator | 2026-04-13 00:52:09 | INFO  | Task 
c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:52:09.673414 | orchestrator | 2026-04-13 00:52:09 | INFO  | Task 78fe2e3d-2c84-451e-bff3-839732edd070 is in state SUCCESS 2026-04-13 00:52:09.675097 | orchestrator | 2026-04-13 00:52:09.675142 | orchestrator | 2026-04-13 00:52:09.675158 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-04-13 00:52:09.675180 | orchestrator | 2026-04-13 00:52:09.675200 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-13 00:52:09.675220 | orchestrator | Monday 13 April 2026 00:51:17 +0000 (0:00:00.280) 0:00:00.280 ********** 2026-04-13 00:52:09.675235 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-13 00:52:09.675247 | orchestrator | 2026-04-13 00:52:09.675258 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-13 00:52:09.675270 | orchestrator | Monday 13 April 2026 00:51:18 +0000 (0:00:01.280) 0:00:01.560 ********** 2026-04-13 00:52:09.675283 | orchestrator | changed: [testbed-manager] 2026-04-13 00:52:09.675296 | orchestrator | 2026-04-13 00:52:09.675313 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-04-13 00:52:09.675332 | orchestrator | Monday 13 April 2026 00:51:20 +0000 (0:00:01.782) 0:00:03.342 ********** 2026-04-13 00:52:09.675350 | orchestrator | changed: [testbed-manager] 2026-04-13 00:52:09.675369 | orchestrator | 2026-04-13 00:52:09.675388 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:52:09.675406 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:52:09.675427 | orchestrator | 2026-04-13 00:52:09.675446 | orchestrator | 2026-04-13 00:52:09.675467 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-13 00:52:09.675562 | orchestrator | Monday 13 April 2026 00:51:21 +0000 (0:00:00.600) 0:00:03.943 ********** 2026-04-13 00:52:09.675578 | orchestrator | =============================================================================== 2026-04-13 00:52:09.675590 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.78s 2026-04-13 00:52:09.675601 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.28s 2026-04-13 00:52:09.675612 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.60s 2026-04-13 00:52:09.675623 | orchestrator | 2026-04-13 00:52:09.675635 | orchestrator | 2026-04-13 00:52:09.675646 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-13 00:52:09.675680 | orchestrator | 2026-04-13 00:52:09.675692 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-13 00:52:09.675703 | orchestrator | Monday 13 April 2026 00:51:17 +0000 (0:00:00.275) 0:00:00.275 ********** 2026-04-13 00:52:09.675730 | orchestrator | ok: [testbed-manager] 2026-04-13 00:52:09.675743 | orchestrator | 2026-04-13 00:52:09.675754 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-13 00:52:09.675765 | orchestrator | Monday 13 April 2026 00:51:18 +0000 (0:00:00.968) 0:00:01.243 ********** 2026-04-13 00:52:09.675776 | orchestrator | ok: [testbed-manager] 2026-04-13 00:52:09.675788 | orchestrator | 2026-04-13 00:52:09.675799 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-13 00:52:09.675810 | orchestrator | Monday 13 April 2026 00:51:19 +0000 (0:00:00.750) 0:00:01.994 ********** 2026-04-13 00:52:09.675821 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-13 
00:52:09.675833 | orchestrator | 2026-04-13 00:52:09.675844 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-13 00:52:09.675855 | orchestrator | Monday 13 April 2026 00:51:20 +0000 (0:00:01.514) 0:00:03.508 ********** 2026-04-13 00:52:09.675866 | orchestrator | changed: [testbed-manager] 2026-04-13 00:52:09.675878 | orchestrator | 2026-04-13 00:52:09.675898 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-13 00:52:09.675936 | orchestrator | Monday 13 April 2026 00:51:22 +0000 (0:00:01.469) 0:00:04.978 ********** 2026-04-13 00:52:09.675966 | orchestrator | changed: [testbed-manager] 2026-04-13 00:52:09.675984 | orchestrator | 2026-04-13 00:52:09.676003 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-13 00:52:09.676021 | orchestrator | Monday 13 April 2026 00:51:22 +0000 (0:00:00.598) 0:00:05.577 ********** 2026-04-13 00:52:09.676038 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-13 00:52:09.676054 | orchestrator | 2026-04-13 00:52:09.676072 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-13 00:52:09.676090 | orchestrator | Monday 13 April 2026 00:51:24 +0000 (0:00:02.178) 0:00:07.755 ********** 2026-04-13 00:52:09.676108 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-13 00:52:09.676128 | orchestrator | 2026-04-13 00:52:09.676146 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-13 00:52:09.676163 | orchestrator | Monday 13 April 2026 00:51:25 +0000 (0:00:00.739) 0:00:08.495 ********** 2026-04-13 00:52:09.676180 | orchestrator | ok: [testbed-manager] 2026-04-13 00:52:09.676197 | orchestrator | 2026-04-13 00:52:09.676215 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-13 00:52:09.676234 | 
orchestrator | Monday 13 April 2026 00:51:26 +0000 (0:00:00.374) 0:00:08.870 ********** 2026-04-13 00:52:09.676276 | orchestrator | ok: [testbed-manager] 2026-04-13 00:52:09.676310 | orchestrator | 2026-04-13 00:52:09.676322 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:52:09.676334 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:52:09.676346 | orchestrator | 2026-04-13 00:52:09.676357 | orchestrator | 2026-04-13 00:52:09.676369 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:52:09.676394 | orchestrator | Monday 13 April 2026 00:51:26 +0000 (0:00:00.285) 0:00:09.155 ********** 2026-04-13 00:52:09.676405 | orchestrator | =============================================================================== 2026-04-13 00:52:09.676417 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.18s 2026-04-13 00:52:09.676428 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.51s 2026-04-13 00:52:09.676439 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.47s 2026-04-13 00:52:09.676468 | orchestrator | Get home directory of operator user ------------------------------------- 0.97s 2026-04-13 00:52:09.676480 | orchestrator | Create .kube directory -------------------------------------------------- 0.75s 2026-04-13 00:52:09.676491 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.74s 2026-04-13 00:52:09.676502 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.60s 2026-04-13 00:52:09.676588 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.37s 2026-04-13 00:52:09.676608 | orchestrator | Enable kubectl command line completion 
---------------------------------- 0.29s 2026-04-13 00:52:09.676627 | orchestrator | 2026-04-13 00:52:09.676647 | orchestrator | 2026-04-13 00:52:09.676665 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-04-13 00:52:09.676680 | orchestrator | 2026-04-13 00:52:09.676692 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-13 00:52:09.676703 | orchestrator | Monday 13 April 2026 00:49:45 +0000 (0:00:00.114) 0:00:00.114 ********** 2026-04-13 00:52:09.676714 | orchestrator | ok: [localhost] => { 2026-04-13 00:52:09.676726 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-04-13 00:52:09.676738 | orchestrator | } 2026-04-13 00:52:09.676750 | orchestrator | 2026-04-13 00:52:09.676761 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-04-13 00:52:09.676772 | orchestrator | Monday 13 April 2026 00:49:45 +0000 (0:00:00.039) 0:00:00.153 ********** 2026-04-13 00:52:09.676785 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-04-13 00:52:09.676798 | orchestrator | ...ignoring 2026-04-13 00:52:09.676810 | orchestrator | 2026-04-13 00:52:09.676821 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-04-13 00:52:09.676832 | orchestrator | Monday 13 April 2026 00:49:49 +0000 (0:00:04.390) 0:00:04.544 ********** 2026-04-13 00:52:09.676844 | orchestrator | skipping: [localhost] 2026-04-13 00:52:09.676855 | orchestrator | 2026-04-13 00:52:09.676866 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-04-13 00:52:09.676877 | orchestrator | Monday 13 April 2026 00:49:49 +0000 (0:00:00.212) 0:00:04.757 ********** 2026-04-13 00:52:09.676888 | orchestrator | ok: [localhost] 2026-04-13 00:52:09.676900 | orchestrator | 2026-04-13 00:52:09.676919 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 00:52:09.677119 | orchestrator | 2026-04-13 00:52:09.677134 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 00:52:09.677146 | orchestrator | Monday 13 April 2026 00:49:50 +0000 (0:00:00.842) 0:00:05.599 ********** 2026-04-13 00:52:09.677158 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:52:09.677169 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:52:09.677180 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:52:09.677192 | orchestrator | 2026-04-13 00:52:09.677203 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 00:52:09.677215 | orchestrator | Monday 13 April 2026 00:49:50 +0000 (0:00:00.381) 0:00:05.981 ********** 2026-04-13 00:52:09.677226 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-13 00:52:09.677238 | orchestrator | ok: [testbed-node-2] => 
(item=enable_rabbitmq_True) 2026-04-13 00:52:09.677249 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-13 00:52:09.677272 | orchestrator | 2026-04-13 00:52:09.677284 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-13 00:52:09.677295 | orchestrator | 2026-04-13 00:52:09.677307 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-13 00:52:09.677318 | orchestrator | Monday 13 April 2026 00:49:51 +0000 (0:00:00.757) 0:00:06.738 ********** 2026-04-13 00:52:09.677330 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:52:09.677341 | orchestrator | 2026-04-13 00:52:09.677353 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-13 00:52:09.677364 | orchestrator | Monday 13 April 2026 00:49:52 +0000 (0:00:01.196) 0:00:07.935 ********** 2026-04-13 00:52:09.677376 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:52:09.677387 | orchestrator | 2026-04-13 00:52:09.677399 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-13 00:52:09.677410 | orchestrator | Monday 13 April 2026 00:49:55 +0000 (0:00:02.355) 0:00:10.290 ********** 2026-04-13 00:52:09.677421 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:52:09.677433 | orchestrator | 2026-04-13 00:52:09.677444 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-13 00:52:09.677456 | orchestrator | Monday 13 April 2026 00:49:55 +0000 (0:00:00.473) 0:00:10.764 ********** 2026-04-13 00:52:09.677467 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:52:09.677479 | orchestrator | 2026-04-13 00:52:09.677490 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-13 00:52:09.677501 | 
orchestrator | Monday 13 April 2026 00:49:56 +0000 (0:00:00.531) 0:00:11.295 ********** 2026-04-13 00:52:09.677541 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:52:09.677553 | orchestrator | 2026-04-13 00:52:09.677565 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-13 00:52:09.677576 | orchestrator | Monday 13 April 2026 00:49:56 +0000 (0:00:00.426) 0:00:11.722 ********** 2026-04-13 00:52:09.677587 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:52:09.677599 | orchestrator | 2026-04-13 00:52:09.677610 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-13 00:52:09.677622 | orchestrator | Monday 13 April 2026 00:49:57 +0000 (0:00:00.423) 0:00:12.146 ********** 2026-04-13 00:52:09.677633 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:52:09.677644 | orchestrator | 2026-04-13 00:52:09.677656 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-13 00:52:09.677679 | orchestrator | Monday 13 April 2026 00:49:58 +0000 (0:00:01.115) 0:00:13.261 ********** 2026-04-13 00:52:09.677691 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:52:09.677702 | orchestrator | 2026-04-13 00:52:09.677714 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-13 00:52:09.677725 | orchestrator | Monday 13 April 2026 00:49:59 +0000 (0:00:01.478) 0:00:14.740 ********** 2026-04-13 00:52:09.677737 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:52:09.677748 | orchestrator | 2026-04-13 00:52:09.677759 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-13 00:52:09.677771 | orchestrator | Monday 13 April 2026 00:50:02 +0000 (0:00:02.636) 0:00:17.376 ********** 2026-04-13 00:52:09.677782 | orchestrator | 
skipping: [testbed-node-0] 2026-04-13 00:52:09.677793 | orchestrator | 2026-04-13 00:52:09.677805 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-13 00:52:09.677816 | orchestrator | Monday 13 April 2026 00:50:02 +0000 (0:00:00.476) 0:00:17.853 ********** 2026-04-13 00:52:09.677833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-13 00:52:09.677876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-13 00:52:09.677891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-13 00:52:09.677904 | orchestrator | 2026-04-13 00:52:09.677915 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-13 00:52:09.677927 | orchestrator | Monday 13 April 2026 00:50:04 +0000 (0:00:01.379) 0:00:19.232 ********** 2026-04-13 00:52:09.677948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-13 00:52:09.677974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-13 00:52:09.677987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-13 00:52:09.678000 | orchestrator | 2026-04-13 00:52:09.678011 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-13 00:52:09.678083 | orchestrator | Monday 13 April 2026 00:50:05 +0000 (0:00:01.757) 0:00:20.990 ********** 2026-04-13 00:52:09.678095 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-13 00:52:09.678107 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-13 00:52:09.678118 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-13 00:52:09.678130 | orchestrator | 2026-04-13 00:52:09.678141 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-04-13 00:52:09.678153 | orchestrator | Monday 13 April 2026 00:50:07 +0000 (0:00:01.656) 0:00:22.647 ********** 2026-04-13 00:52:09.678164 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-13 00:52:09.678176 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-13 00:52:09.678187 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-13 00:52:09.678198 | orchestrator | 2026-04-13 00:52:09.678217 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-13 00:52:09.678246 | orchestrator | Monday 13 April 2026 00:50:12 +0000 (0:00:04.402) 0:00:27.050 ********** 2026-04-13 00:52:09.678264 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-13 00:52:09.678281 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-13 00:52:09.678309 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-13 00:52:09.678328 | orchestrator | 2026-04-13 00:52:09.678346 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-13 00:52:09.678363 | orchestrator | Monday 13 April 2026 00:50:13 +0000 (0:00:01.646) 0:00:28.696 ********** 2026-04-13 00:52:09.678382 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-13 00:52:09.678400 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-13 00:52:09.678419 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-13 00:52:09.678438 | orchestrator | 2026-04-13 00:52:09.678458 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-04-13 00:52:09.678477 | orchestrator | Monday 13 April 2026 00:50:15 +0000 (0:00:01.769) 0:00:30.465 ********** 2026-04-13 00:52:09.678492 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-13 00:52:09.678503 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-13 00:52:09.678544 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-13 00:52:09.678556 | orchestrator | 2026-04-13 00:52:09.678568 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-13 00:52:09.678579 | orchestrator | Monday 13 April 2026 00:50:17 +0000 (0:00:01.734) 0:00:32.200 ********** 2026-04-13 00:52:09.678590 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-13 00:52:09.678602 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-13 00:52:09.678613 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-13 00:52:09.678624 | orchestrator | 2026-04-13 00:52:09.678643 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-13 00:52:09.678655 | orchestrator | Monday 13 April 2026 00:50:19 +0000 (0:00:02.532) 0:00:34.732 ********** 2026-04-13 00:52:09.678666 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:52:09.678678 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:52:09.678689 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:52:09.678701 | orchestrator | 2026-04-13 00:52:09.678712 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-04-13 00:52:09.678723 | orchestrator | Monday 13 April 2026 00:50:20 
+0000 (0:00:00.825) 0:00:35.558 ********** 2026-04-13 00:52:09.678737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-13 00:52:09.678761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-13 00:52:09.678785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-13 00:52:09.678798 | orchestrator | 2026-04-13 00:52:09.678809 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-13 00:52:09.678820 | orchestrator | Monday 13 April 2026 00:50:21 +0000 (0:00:01.396) 0:00:36.955 ********** 2026-04-13 00:52:09.678832 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:52:09.678843 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:52:09.678854 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:52:09.678866 | orchestrator | 2026-04-13 00:52:09.678877 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-13 00:52:09.678888 | 
orchestrator | Monday 13 April 2026 00:50:22 +0000 (0:00:00.878) 0:00:37.833 ********** 2026-04-13 00:52:09.678900 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:52:09.678916 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:52:09.678928 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:52:09.678939 | orchestrator | 2026-04-13 00:52:09.678951 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-04-13 00:52:09.678962 | orchestrator | Monday 13 April 2026 00:50:29 +0000 (0:00:06.626) 0:00:44.460 ********** 2026-04-13 00:52:09.678974 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:52:09.678985 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:52:09.678996 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:52:09.679008 | orchestrator | 2026-04-13 00:52:09.679019 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-13 00:52:09.679030 | orchestrator | 2026-04-13 00:52:09.679041 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-13 00:52:09.679056 | orchestrator | Monday 13 April 2026 00:50:29 +0000 (0:00:00.337) 0:00:44.797 ********** 2026-04-13 00:52:09.679075 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:52:09.679095 | orchestrator | 2026-04-13 00:52:09.679114 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-13 00:52:09.679132 | orchestrator | Monday 13 April 2026 00:50:30 +0000 (0:00:00.536) 0:00:45.334 ********** 2026-04-13 00:52:09.679151 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:52:09.679171 | orchestrator | 2026-04-13 00:52:09.679190 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-13 00:52:09.679220 | orchestrator | Monday 13 April 2026 00:50:30 +0000 (0:00:00.364) 0:00:45.699 ********** 2026-04-13 00:52:09.679240 | orchestrator 
| changed: [testbed-node-0]
2026-04-13 00:52:09.679259 | orchestrator | 
2026-04-13 00:52:09.679279 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-13 00:52:09.679299 | orchestrator | Monday 13 April 2026 00:50:32 +0000 (0:00:01.838) 0:00:47.537 **********
2026-04-13 00:52:09.679318 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:52:09.679337 | orchestrator | 
2026-04-13 00:52:09.679357 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-13 00:52:09.679376 | orchestrator | 
2026-04-13 00:52:09.679395 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-13 00:52:09.679413 | orchestrator | Monday 13 April 2026 00:51:27 +0000 (0:00:55.344) 0:01:42.882 **********
2026-04-13 00:52:09.679425 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:52:09.679436 | orchestrator | 
2026-04-13 00:52:09.679447 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-13 00:52:09.679459 | orchestrator | Monday 13 April 2026 00:51:28 +0000 (0:00:00.678) 0:01:43.560 **********
2026-04-13 00:52:09.679470 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:52:09.679481 | orchestrator | 
2026-04-13 00:52:09.679492 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-13 00:52:09.679504 | orchestrator | Monday 13 April 2026 00:51:28 +0000 (0:00:00.369) 0:01:43.930 **********
2026-04-13 00:52:09.679543 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:52:09.679555 | orchestrator | 
2026-04-13 00:52:09.679566 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-13 00:52:09.679578 | orchestrator | Monday 13 April 2026 00:51:30 +0000 (0:00:02.015) 0:01:45.945 **********
2026-04-13 00:52:09.679589 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:52:09.679600 | orchestrator | 
2026-04-13 00:52:09.679614 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-13 00:52:09.679633 | orchestrator | 
2026-04-13 00:52:09.679651 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-13 00:52:09.679671 | orchestrator | Monday 13 April 2026 00:51:46 +0000 (0:00:15.167) 0:02:01.113 **********
2026-04-13 00:52:09.679690 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:52:09.679708 | orchestrator | 
2026-04-13 00:52:09.679740 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-13 00:52:09.679759 | orchestrator | Monday 13 April 2026 00:51:46 +0000 (0:00:00.604) 0:02:01.718 **********
2026-04-13 00:52:09.679771 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:52:09.679783 | orchestrator | 
2026-04-13 00:52:09.679794 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-13 00:52:09.679805 | orchestrator | Monday 13 April 2026 00:51:46 +0000 (0:00:00.217) 0:02:01.935 **********
2026-04-13 00:52:09.679816 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:52:09.679832 | orchestrator | 
2026-04-13 00:52:09.679850 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-13 00:52:09.679867 | orchestrator | Monday 13 April 2026 00:51:48 +0000 (0:00:01.760) 0:02:03.696 **********
2026-04-13 00:52:09.679885 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:52:09.679903 | orchestrator | 
2026-04-13 00:52:09.679920 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-04-13 00:52:09.679936 | orchestrator | 
2026-04-13 00:52:09.679954 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-04-13 00:52:09.679974 | orchestrator | Monday 13 April 2026 00:52:04 +0000 (0:00:15.530) 
0:02:19.226 **********
2026-04-13 00:52:09.679991 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:52:09.680008 | orchestrator | 
2026-04-13 00:52:09.680025 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-04-13 00:52:09.680042 | orchestrator | Monday 13 April 2026 00:52:04 +0000 (0:00:00.703) 0:02:19.930 **********
2026-04-13 00:52:09.680061 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:52:09.680106 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:52:09.680124 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:52:09.680142 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-13 00:52:09.680154 | orchestrator | enable_outward_rabbitmq_True
2026-04-13 00:52:09.680166 | orchestrator | 
2026-04-13 00:52:09.680177 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-04-13 00:52:09.680188 | orchestrator | skipping: no hosts matched
2026-04-13 00:52:09.680199 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-13 00:52:09.680211 | orchestrator | outward_rabbitmq_restart
2026-04-13 00:52:09.680222 | orchestrator | 
2026-04-13 00:52:09.680233 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-04-13 00:52:09.680244 | orchestrator | skipping: no hosts matched
2026-04-13 00:52:09.680256 | orchestrator | 
2026-04-13 00:52:09.680267 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-04-13 00:52:09.680417 | orchestrator | skipping: no hosts matched
2026-04-13 00:52:09.680439 | orchestrator | 
2026-04-13 00:52:09.680459 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:52:09.680480 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-13 00:52:09.680503 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-13 00:52:09.680562 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:52:09.680583 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:52:09.680601 | orchestrator | 
2026-04-13 00:52:09.680620 | orchestrator | 
2026-04-13 00:52:09.680639 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:52:09.680659 | orchestrator | Monday 13 April 2026 00:52:07 +0000 (0:00:02.459) 0:02:22.389 **********
2026-04-13 00:52:09.680678 | orchestrator | ===============================================================================
2026-04-13 00:52:09.680696 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 86.04s
2026-04-13 00:52:09.680708 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.63s
2026-04-13 00:52:09.680719 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.61s
2026-04-13 00:52:09.680730 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.40s
2026-04-13 00:52:09.680742 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.39s
2026-04-13 00:52:09.680753 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 2.64s
2026-04-13 00:52:09.680765 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.53s
2026-04-13 00:52:09.680776 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.46s
2026-04-13 00:52:09.680787 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.36s
2026-04-13 00:52:09.680798 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.82s
2026-04-13 00:52:09.680809 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.77s
2026-04-13 00:52:09.680821 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.76s
2026-04-13 00:52:09.680832 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.73s
2026-04-13 00:52:09.680843 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.66s
2026-04-13 00:52:09.680854 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.65s
2026-04-13 00:52:09.680865 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.48s
2026-04-13 00:52:09.680889 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.40s
2026-04-13 00:52:09.680913 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.38s
2026-04-13 00:52:09.680925 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.20s
2026-04-13 00:52:09.680936 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.12s
2026-04-13 00:52:09.680948 | orchestrator | 2026-04-13 00:52:09 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED
2026-04-13 00:52:09.680959 | orchestrator | 2026-04-13 00:52:09 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:52:09.680971 | orchestrator | 2026-04-13 00:52:09 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:52:12.719253 | orchestrator | 2026-04-13 00:52:12 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:52:12.719363 | orchestrator | 2026-04-13 00:52:12 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state STARTED
2026-04-13 00:52:12.719388 | orchestrator | 2026-04-13 00:52:12 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:52:12.719409 | orchestrator | 2026-04-13 00:52:12 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:53:19.841145 | orchestrator | 2026-04-13 00:53:19 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:53:19.844768 | orchestrator | 
2026-04-13 00:53:19.844860 | orchestrator | 2026-04-13 00:53:19 | INFO  | Task 726c6de6-c911-4e1f-ada4-3ccb8d595d40 is in state SUCCESS
2026-04-13 00:53:19.846374 | orchestrator | 
2026-04-13 00:53:19.846445 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 00:53:19.846459 | orchestrator | 
2026-04-13 00:53:19.846469 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 00:53:19.846478 | orchestrator | Monday 13 April 2026 00:50:37 +0000 (0:00:00.200) 0:00:00.200 **********
2026-04-13 00:53:19.846489 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:19.846524 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:19.846534 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:19.846544 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:53:19.846554 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:53:19.846564 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:53:19.846574 | orchestrator | 2026-04-13 00:53:19.846584 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 00:53:19.846594 | orchestrator | Monday 13 April 2026 00:50:38 +0000 (0:00:00.757) 0:00:00.957 ********** 2026-04-13 00:53:19.846604 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-13 00:53:19.846615 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-13 00:53:19.846625 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-13 00:53:19.846633 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-13 00:53:19.846640 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-13 00:53:19.846646 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-13 00:53:19.846653 | orchestrator | 2026-04-13 00:53:19.846659 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-13 00:53:19.846665 | orchestrator | 2026-04-13 00:53:19.846671 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-13 00:53:19.846678 | orchestrator | Monday 13 April 2026 00:50:40 +0000 (0:00:02.188) 0:00:03.146 ********** 2026-04-13 00:53:19.846685 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:53:19.846693 | orchestrator | 2026-04-13 00:53:19.846699 | orchestrator | TASK [ovn-controller : Ensuring 
config directories exist] ********************** 2026-04-13 00:53:19.846705 | orchestrator | Monday 13 April 2026 00:50:42 +0000 (0:00:01.950) 0:00:05.096 ********** 2026-04-13 00:53:19.846713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846761 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-04-13 00:53:19.846767 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846783 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846789 | orchestrator | 2026-04-13 00:53:19.846809 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-13 00:53:19.846815 | orchestrator | Monday 13 April 2026 00:50:44 +0000 (0:00:01.628) 0:00:06.724 ********** 2026-04-13 00:53:19.846822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846910 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846923 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846929 | orchestrator | 2026-04-13 00:53:19.846935 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-13 00:53:19.846941 | orchestrator | Monday 13 April 2026 00:50:46 +0000 (0:00:02.058) 0:00:08.783 ********** 2026-04-13 00:53:19.846952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846978 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846984 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.846991 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.847004 | orchestrator | 2026-04-13 00:53:19.847010 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-13 00:53:19.847016 | orchestrator | Monday 13 April 2026 00:50:49 +0000 (0:00:02.948) 0:00:11.732 ********** 2026-04-13 00:53:19.847022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.847029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.847035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.847045 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.847051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.847058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.847064 | orchestrator | 2026-04-13 00:53:19.847074 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-13 00:53:19.847081 | orchestrator | Monday 13 April 2026 00:50:51 +0000 (0:00:02.253) 0:00:13.985 ********** 2026-04-13 00:53:19.847087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.847093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.847105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.847111 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.847117 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.847124 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.847130 | orchestrator | 2026-04-13 00:53:19.847136 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-13 00:53:19.847143 | orchestrator | Monday 13 April 2026 00:50:53 +0000 (0:00:02.171) 0:00:16.157 ********** 2026-04-13 00:53:19.847149 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:19.847156 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:19.847162 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:19.847168 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:53:19.847174 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:53:19.847180 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:53:19.847186 | orchestrator | 
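The "Configure OVN in OVSDB" task that follows writes per-chassis `external_ids` such as `ovn-remote` with the value `tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642`. A minimal sketch of how such a connection string can be assembled from the control-node IPs (the helper name and parameters are illustrative assumptions, not kolla-ansible's actual template code):

```python
# Sketch: build an OVN southbound "ovn-remote" string like the one seen in
# this log from a list of control-plane node IPs. Illustrative only; the
# real value is rendered by the kolla-ansible role's templates.

def build_ovn_remote(ips, port=6642, proto="tcp"):
    """Join per-node endpoints into the comma-separated form OVN expects."""
    return ",".join(f"{proto}:{ip}:{port}" for ip in ips)

control_ips = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
print(build_ovn_remote(control_ips))
# tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
```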
2026-04-13 00:53:19.847192 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-13 00:53:19.847202 | orchestrator | Monday 13 April 2026 00:50:56 +0000 (0:00:03.089) 0:00:19.246 ********** 2026-04-13 00:53:19.847208 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-13 00:53:19.847215 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-13 00:53:19.847221 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-13 00:53:19.847228 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-13 00:53:19.847234 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-13 00:53:19.847240 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-13 00:53:19.847246 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-13 00:53:19.847252 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-13 00:53:19.847266 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-13 00:53:19.847273 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-13 00:53:19.847279 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-13 00:53:19.847287 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-13 00:53:19.847294 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-13 00:53:19.847300 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-13 00:53:19.847306 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-13 00:53:19.847313 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-13 00:53:19.847322 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-13 00:53:19.847332 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-13 00:53:19.847342 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-13 00:53:19.847353 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-13 00:53:19.847363 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-13 00:53:19.847374 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-13 00:53:19.847385 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-13 00:53:19.847395 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-13 00:53:19.847401 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-13 00:53:19.847407 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-13 
00:53:19.847414 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-13 00:53:19.847420 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-13 00:53:19.847426 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-13 00:53:19.847432 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-13 00:53:19.847438 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-13 00:53:19.847445 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-13 00:53:19.847451 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-13 00:53:19.847457 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-13 00:53:19.847463 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-13 00:53:19.847470 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-13 00:53:19.847476 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-13 00:53:19.847486 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-13 00:53:19.847556 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-13 00:53:19.847565 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-13 00:53:19.847572 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-13 00:53:19.847578 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-13 00:53:19.847585 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-13 00:53:19.847591 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-13 00:53:19.847602 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-13 00:53:19.847610 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-13 00:53:19.847616 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-13 00:53:19.847622 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-13 00:53:19.847629 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-13 00:53:19.847636 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-13 00:53:19.847642 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-13 00:53:19.847648 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-13 00:53:19.847654 | orchestrator | ok: 
[testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-13 00:53:19.847661 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-13 00:53:19.847667 | orchestrator | 2026-04-13 00:53:19.847673 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-13 00:53:19.847679 | orchestrator | Monday 13 April 2026 00:51:18 +0000 (0:00:21.512) 0:00:40.759 ********** 2026-04-13 00:53:19.847686 | orchestrator | 2026-04-13 00:53:19.847692 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-13 00:53:19.847698 | orchestrator | Monday 13 April 2026 00:51:18 +0000 (0:00:00.067) 0:00:40.827 ********** 2026-04-13 00:53:19.847704 | orchestrator | 2026-04-13 00:53:19.847710 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-13 00:53:19.847716 | orchestrator | Monday 13 April 2026 00:51:18 +0000 (0:00:00.071) 0:00:40.898 ********** 2026-04-13 00:53:19.847723 | orchestrator | 2026-04-13 00:53:19.847729 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-13 00:53:19.847735 | orchestrator | Monday 13 April 2026 00:51:18 +0000 (0:00:00.066) 0:00:40.965 ********** 2026-04-13 00:53:19.847742 | orchestrator | 2026-04-13 00:53:19.847748 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-13 00:53:19.847754 | orchestrator | Monday 13 April 2026 00:51:18 +0000 (0:00:00.069) 0:00:41.035 ********** 2026-04-13 00:53:19.847760 | orchestrator | 2026-04-13 00:53:19.847766 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-13 00:53:19.847772 | orchestrator | Monday 13 April 2026 00:51:18 +0000 (0:00:00.072) 0:00:41.107 ********** 2026-04-13 00:53:19.847784 | orchestrator | 2026-04-13 
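In the "Configure OVN in OVSDB" output above, the control nodes (testbed-node-0/1/2) receive `ovn-bridge-mappings` and `ovn-cms-options` with `state: present`, while the compute nodes (testbed-node-3/4/5) get the same keys with `state: absent` — only gateway chassis keep them. A sketch of that per-host selection, assuming membership in a gateway group is what drives the state (the function and group set are hypothetical, not the role's actual logic):

```python
# Sketch: decide which gateway-only external_ids entries a chassis keeps,
# mirroring the present/absent pattern in the log above. The helper and
# the gateway set are assumptions for illustration, not kolla-ansible code.

def gateway_settings(host, gateway_hosts):
    """Return {key: (state, value)} for the gateway-only external_ids keys."""
    state = "present" if host in gateway_hosts else "absent"
    return {
        "ovn-bridge-mappings": (state, "physnet1:br-ex"),
        "ovn-cms-options": (state, "enable-chassis-as-gw,availability-zones=nova"),
    }

gateways = {"testbed-node-0", "testbed-node-1", "testbed-node-2"}
# Control node: keys present; compute node: keys marked for removal.
print(gateway_settings("testbed-node-0", gateways)["ovn-cms-options"][0])  # present
print(gateway_settings("testbed-node-4", gateways)["ovn-cms-options"][0])  # absent
```

Note that `ovn-chassis-mac-mappings` follows the opposite pattern in this run (present on nodes 3-5, absent on 0-2), so each key carries its own condition rather than one shared gateway test.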
00:53:19.847790 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-04-13 00:53:19.847796 | orchestrator | Monday 13 April 2026 00:51:18 +0000 (0:00:00.243) 0:00:41.351 ********** 2026-04-13 00:53:19.847802 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:53:19.847809 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:19.847816 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:19.847822 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:19.847828 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:53:19.847835 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:53:19.847842 | orchestrator | 2026-04-13 00:53:19.847848 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-13 00:53:19.847854 | orchestrator | Monday 13 April 2026 00:51:21 +0000 (0:00:03.266) 0:00:44.617 ********** 2026-04-13 00:53:19.847860 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:19.847867 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:19.847873 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:53:19.847880 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:53:19.847886 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:53:19.847892 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:19.847898 | orchestrator | 2026-04-13 00:53:19.847904 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-13 00:53:19.847910 | orchestrator | 2026-04-13 00:53:19.847917 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-13 00:53:19.847923 | orchestrator | Monday 13 April 2026 00:51:58 +0000 (0:00:36.304) 0:01:20.922 ********** 2026-04-13 00:53:19.847933 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:19.847939 | orchestrator | 2026-04-13 00:53:19.847945 | 
orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-13 00:53:19.847952 | orchestrator | Monday 13 April 2026 00:51:58 +0000 (0:00:00.528) 0:01:21.451 ********** 2026-04-13 00:53:19.847958 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:19.847965 | orchestrator | 2026-04-13 00:53:19.847971 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-13 00:53:19.847977 | orchestrator | Monday 13 April 2026 00:51:59 +0000 (0:00:00.758) 0:01:22.209 ********** 2026-04-13 00:53:19.847983 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:19.847989 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:19.847996 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:19.848002 | orchestrator | 2026-04-13 00:53:19.848008 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-13 00:53:19.848014 | orchestrator | Monday 13 April 2026 00:52:00 +0000 (0:00:00.812) 0:01:23.022 ********** 2026-04-13 00:53:19.848020 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:19.848026 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:19.848033 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:19.848039 | orchestrator | 2026-04-13 00:53:19.848049 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-13 00:53:19.848056 | orchestrator | Monday 13 April 2026 00:52:00 +0000 (0:00:00.358) 0:01:23.380 ********** 2026-04-13 00:53:19.848062 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:19.848068 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:19.848074 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:19.848081 | orchestrator | 2026-04-13 00:53:19.848087 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-13 
00:53:19.848093 | orchestrator | Monday 13 April 2026 00:52:01 +0000 (0:00:00.557) 0:01:23.938 ********** 2026-04-13 00:53:19.848099 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:19.848106 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:19.848112 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:19.848118 | orchestrator | 2026-04-13 00:53:19.848124 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-13 00:53:19.848136 | orchestrator | Monday 13 April 2026 00:52:01 +0000 (0:00:00.398) 0:01:24.336 ********** 2026-04-13 00:53:19.848142 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:19.848148 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:19.848155 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:19.848161 | orchestrator | 2026-04-13 00:53:19.848168 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-13 00:53:19.848174 | orchestrator | Monday 13 April 2026 00:52:02 +0000 (0:00:00.349) 0:01:24.685 ********** 2026-04-13 00:53:19.848180 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848187 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848193 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848200 | orchestrator | 2026-04-13 00:53:19.848206 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-13 00:53:19.848212 | orchestrator | Monday 13 April 2026 00:52:02 +0000 (0:00:00.271) 0:01:24.957 ********** 2026-04-13 00:53:19.848219 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848225 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848231 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848237 | orchestrator | 2026-04-13 00:53:19.848243 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-13 00:53:19.848250 | orchestrator | Monday 
13 April 2026 00:52:02 +0000 (0:00:00.540) 0:01:25.497 ********** 2026-04-13 00:53:19.848256 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848262 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848268 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848274 | orchestrator | 2026-04-13 00:53:19.848280 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-13 00:53:19.848287 | orchestrator | Monday 13 April 2026 00:52:03 +0000 (0:00:00.316) 0:01:25.813 ********** 2026-04-13 00:53:19.848293 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848299 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848305 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848311 | orchestrator | 2026-04-13 00:53:19.848317 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-13 00:53:19.848323 | orchestrator | Monday 13 April 2026 00:52:03 +0000 (0:00:00.328) 0:01:26.141 ********** 2026-04-13 00:53:19.848330 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848336 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848342 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848348 | orchestrator | 2026-04-13 00:53:19.848354 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-13 00:53:19.848360 | orchestrator | Monday 13 April 2026 00:52:03 +0000 (0:00:00.310) 0:01:26.452 ********** 2026-04-13 00:53:19.848367 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848373 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848379 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848385 | orchestrator | 2026-04-13 00:53:19.848391 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-13 00:53:19.848398 | orchestrator | Monday 13 
April 2026 00:52:04 +0000 (0:00:00.292) 0:01:26.744 ********** 2026-04-13 00:53:19.848404 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848410 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848416 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848423 | orchestrator | 2026-04-13 00:53:19.848430 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-13 00:53:19.848440 | orchestrator | Monday 13 April 2026 00:52:04 +0000 (0:00:00.540) 0:01:27.284 ********** 2026-04-13 00:53:19.848450 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848461 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848470 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848476 | orchestrator | 2026-04-13 00:53:19.848482 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-13 00:53:19.848488 | orchestrator | Monday 13 April 2026 00:52:04 +0000 (0:00:00.322) 0:01:27.607 ********** 2026-04-13 00:53:19.848518 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848525 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848531 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848537 | orchestrator | 2026-04-13 00:53:19.848543 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-13 00:53:19.848549 | orchestrator | Monday 13 April 2026 00:52:05 +0000 (0:00:00.416) 0:01:28.023 ********** 2026-04-13 00:53:19.848555 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848561 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848567 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848573 | orchestrator | 2026-04-13 00:53:19.848579 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-13 00:53:19.848585 | orchestrator | Monday 13 
April 2026 00:52:05 +0000 (0:00:00.351) 0:01:28.375 ********** 2026-04-13 00:53:19.848591 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848597 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848603 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848609 | orchestrator | 2026-04-13 00:53:19.848616 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-04-13 00:53:19.848622 | orchestrator | Monday 13 April 2026 00:52:06 +0000 (0:00:00.621) 0:01:28.996 ********** 2026-04-13 00:53:19.848628 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848634 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848645 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848652 | orchestrator | 2026-04-13 00:53:19.848658 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-13 00:53:19.848664 | orchestrator | Monday 13 April 2026 00:52:06 +0000 (0:00:00.326) 0:01:29.323 ********** 2026-04-13 00:53:19.848671 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:19.848677 | orchestrator | 2026-04-13 00:53:19.848683 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-04-13 00:53:19.848689 | orchestrator | Monday 13 April 2026 00:52:07 +0000 (0:00:00.614) 0:01:29.938 ********** 2026-04-13 00:53:19.848695 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:19.848702 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:19.848708 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:19.848714 | orchestrator | 2026-04-13 00:53:19.848720 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-04-13 00:53:19.848726 | orchestrator | Monday 13 April 2026 00:52:08 +0000 (0:00:00.791) 0:01:30.729 ********** 2026-04-13 
00:53:19.848733 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:19.848739 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:19.848745 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:19.848751 | orchestrator | 2026-04-13 00:53:19.848757 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-04-13 00:53:19.848763 | orchestrator | Monday 13 April 2026 00:52:08 +0000 (0:00:00.476) 0:01:31.205 ********** 2026-04-13 00:53:19.848769 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848775 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848781 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848787 | orchestrator | 2026-04-13 00:53:19.848794 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-04-13 00:53:19.848800 | orchestrator | Monday 13 April 2026 00:52:08 +0000 (0:00:00.376) 0:01:31.582 ********** 2026-04-13 00:53:19.848806 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848812 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848818 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848824 | orchestrator | 2026-04-13 00:53:19.848830 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-04-13 00:53:19.848836 | orchestrator | Monday 13 April 2026 00:52:09 +0000 (0:00:00.388) 0:01:31.970 ********** 2026-04-13 00:53:19.848842 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848856 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848862 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848868 | orchestrator | 2026-04-13 00:53:19.848875 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-04-13 00:53:19.848881 | orchestrator | Monday 13 April 2026 00:52:09 +0000 (0:00:00.620) 0:01:32.591 ********** 2026-04-13 
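In the `ovn-db` bootstrap sequence above, the "Set bootstrap args fact for NB/SB (new cluster)" tasks run while the "(new member)" variants and the cluster-status checks are skipped, because the earlier volume and port-liveness lookups found no pre-existing database. A sketch of that three-way branch, as I read it from which tasks ran versus skipped (the mode names and predicate arguments are assumptions, not the role's variables):

```python
# Sketch: pick the OVN DB bootstrap mode, mirroring the run/skipped tasks
# in the log above: a fresh cluster when no prior DB volume or live peer
# exists, a joining member when peers are already up, and plain reuse when
# the local volume is present. Illustrative interpretation only.

def bootstrap_mode(has_existing_volume, cluster_alive):
    if has_existing_volume:
        return "existing"      # reuse local DB; bootstrap args not needed
    if cluster_alive:
        return "new-member"    # would match the skipped "(new member)" tasks
    return "new-cluster"       # matches the "(new cluster)" tasks that ran

# This run: no volumes found, no live cluster -> bootstrap a new cluster.
print(bootstrap_mode(False, False))  # new-cluster
```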
00:53:19.848887 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848893 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848899 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848905 | orchestrator | 2026-04-13 00:53:19.848911 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-04-13 00:53:19.848917 | orchestrator | Monday 13 April 2026 00:52:10 +0000 (0:00:00.358) 0:01:32.949 ********** 2026-04-13 00:53:19.848923 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848929 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848935 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848942 | orchestrator | 2026-04-13 00:53:19.848948 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-04-13 00:53:19.848954 | orchestrator | Monday 13 April 2026 00:52:10 +0000 (0:00:00.338) 0:01:33.287 ********** 2026-04-13 00:53:19.848960 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:19.848966 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.848972 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.848978 | orchestrator | 2026-04-13 00:53:19.848984 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-13 00:53:19.848990 | orchestrator | Monday 13 April 2026 00:52:10 +0000 (0:00:00.302) 0:01:33.590 ********** 2026-04-13 00:53:19.848997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-04-13 00:53:19.849054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849073 | orchestrator | 2026-04-13 00:53:19.849080 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-13 00:53:19.849086 | orchestrator | Monday 13 April 2026 00:52:12 +0000 (0:00:01.770) 0:01:35.360 ********** 2026-04-13 00:53:19.849093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-13 00:53:19.849099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849162 | orchestrator | 2026-04-13 00:53:19.849169 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-13 00:53:19.849175 | orchestrator | Monday 13 April 2026 00:52:16 +0000 (0:00:04.126) 0:01:39.486 ********** 2026-04-13 00:53:19.849181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849253 | orchestrator | 2026-04-13 00:53:19.849259 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-13 00:53:19.849266 | orchestrator | Monday 13 April 2026 00:52:18 +0000 (0:00:02.100) 0:01:41.587 ********** 2026-04-13 00:53:19.849272 | orchestrator | 2026-04-13 
00:53:19.849278 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-13 00:53:19.849284 | orchestrator | Monday 13 April 2026 00:52:19 +0000 (0:00:00.074) 0:01:41.662 **********
2026-04-13 00:53:19.849290 | orchestrator |
2026-04-13 00:53:19.849296 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-13 00:53:19.849302 | orchestrator | Monday 13 April 2026 00:52:19 +0000 (0:00:00.073) 0:01:41.735 **********
2026-04-13 00:53:19.849308 | orchestrator |
2026-04-13 00:53:19.849314 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-13 00:53:19.849320 | orchestrator | Monday 13 April 2026 00:52:19 +0000 (0:00:00.071) 0:01:41.807 **********
2026-04-13 00:53:19.849327 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:53:19.849333 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:53:19.849339 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:53:19.849345 | orchestrator |
2026-04-13 00:53:19.849355 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-13 00:53:19.849366 | orchestrator | Monday 13 April 2026 00:52:26 +0000 (0:00:07.419) 0:01:49.227 **********
2026-04-13 00:53:19.849376 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:53:19.849383 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:53:19.849389 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:53:19.849395 | orchestrator |
2026-04-13 00:53:19.849402 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-13 00:53:19.849408 | orchestrator | Monday 13 April 2026 00:52:35 +0000 (0:00:08.434) 0:01:57.661 **********
2026-04-13 00:53:19.849414 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:53:19.849420 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:53:19.849426 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:53:19.849432 | orchestrator |
2026-04-13 00:53:19.849438 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-13 00:53:19.849445 | orchestrator | Monday 13 April 2026 00:52:37 +0000 (0:00:02.590) 0:02:00.252 **********
2026-04-13 00:53:19.849451 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:53:19.849457 | orchestrator |
2026-04-13 00:53:19.849463 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-13 00:53:19.849469 | orchestrator | Monday 13 April 2026 00:52:37 +0000 (0:00:00.232) 0:02:00.484 **********
2026-04-13 00:53:19.849475 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:53:19.849481 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:53:19.849492 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:53:19.849528 | orchestrator |
2026-04-13 00:53:19.849539 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-13 00:53:19.849553 | orchestrator | Monday 13 April 2026 00:52:38 +0000 (0:00:01.117) 0:02:01.602 **********
2026-04-13 00:53:19.849562 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:19.849572 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:19.849583 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:53:19.849593 | orchestrator |
2026-04-13 00:53:19.849600 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-13 00:53:19.849606 | orchestrator | Monday 13 April 2026 00:52:39 +0000 (0:00:00.759) 0:02:02.361 **********
2026-04-13 00:53:19.849611 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:53:19.849618 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:53:19.849623 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:53:19.849629 | orchestrator |
2026-04-13 00:53:19.849636 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-13 00:53:19.849642 | orchestrator | Monday 13 April 2026 00:52:40 +0000 (0:00:01.105) 0:02:03.467 **********
2026-04-13 00:53:19.849648 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:19.849654 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:19.849660 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:53:19.849665 | orchestrator |
2026-04-13 00:53:19.849672 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-13 00:53:19.849678 | orchestrator | Monday 13 April 2026 00:52:41 +0000 (0:00:00.754) 0:02:04.222 **********
2026-04-13 00:53:19.849684 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:53:19.849690 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:53:19.849701 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:53:19.849707 | orchestrator |
2026-04-13 00:53:19.849713 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-13 00:53:19.849720 | orchestrator | Monday 13 April 2026 00:52:42 +0000 (0:00:00.982) 0:02:05.205 **********
2026-04-13 00:53:19.849727 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:53:19.849733 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:53:19.849739 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:53:19.849745 | orchestrator |
2026-04-13 00:53:19.849751 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-04-13 00:53:19.849757 | orchestrator | Monday 13 April 2026 00:52:43 +0000 (0:00:00.754) 0:02:05.959 **********
2026-04-13 00:53:19.849764 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:53:19.849770 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:53:19.849776 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:53:19.849782 | orchestrator |
2026-04-13 00:53:19.849788 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-13 00:53:19.849794 | 
orchestrator | Monday 13 April 2026 00:52:43 +0000 (0:00:00.512) 0:02:06.472 ********** 2026-04-13 00:53:19.849801 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849807 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849813 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849828 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849835 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849841 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849852 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849858 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849869 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849875 | orchestrator | 2026-04-13 00:53:19.849881 | 
orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-13 00:53:19.849888 | orchestrator | Monday 13 April 2026 00:52:45 +0000 (0:00:01.343) 0:02:07.816 ********** 2026-04-13 00:53:19.849894 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849901 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849908 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849914 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849938 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849961 | orchestrator | 2026-04-13 00:53:19.849967 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-13 00:53:19.849973 | orchestrator | Monday 13 April 2026 00:52:49 +0000 (0:00:03.878) 0:02:11.695 ********** 2026-04-13 00:53:19.849984 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849990 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.849997 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.850003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.850014 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.850074 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.850081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.850088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:53:19.850099 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:19.850105 | orchestrator |
2026-04-13 00:53:19.850111 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-13 00:53:19.850117 | orchestrator | Monday 13 April 2026 00:52:52 +0000 (0:00:03.108) 0:02:14.804 **********
2026-04-13 00:53:19.850124 | orchestrator |
2026-04-13 00:53:19.850130 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-13 00:53:19.850136 | orchestrator | Monday 13 April 2026 00:52:52 +0000 (0:00:00.143) 0:02:14.947 **********
2026-04-13 00:53:19.850142 | orchestrator |
2026-04-13 00:53:19.850151 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-13 00:53:19.850162 | orchestrator | Monday 13 April 2026 00:52:52 +0000 (0:00:00.356) 0:02:15.304 **********
2026-04-13 00:53:19.850173 | orchestrator |
2026-04-13 00:53:19.850180 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-13 00:53:19.850187 | orchestrator | Monday 13 April 2026 00:52:52 +0000 (0:00:00.065) 0:02:15.369 **********
2026-04-13 00:53:19.850193 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:53:19.850199 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:53:19.850205 | orchestrator |
2026-04-13 00:53:19.850216 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-13 00:53:19.850224 | orchestrator | Monday 13 April 2026 00:52:59 +0000 (0:00:06.285) 0:02:21.654 **********
2026-04-13 00:53:19.850230 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:53:19.850236 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:53:19.850242 | orchestrator |
2026-04-13 00:53:19.850248 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-13 00:53:19.850262 | orchestrator | Monday 13 April 2026 00:53:05 +0000 (0:00:06.287) 0:02:27.942 **********
2026-04-13 00:53:19.850268 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:53:19.850274 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:53:19.850280 | orchestrator |
2026-04-13 00:53:19.850286 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-13 00:53:19.850292 | orchestrator | Monday 13 April 2026 00:53:11 +0000 (0:00:06.323) 0:02:34.265 **********
2026-04-13 00:53:19.850298 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:53:19.850304 | orchestrator |
2026-04-13 00:53:19.850311 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-13 00:53:19.850317 | orchestrator | Monday 13 April 2026 00:53:11 +0000 (0:00:00.135) 0:02:34.401 **********
2026-04-13 00:53:19.850323 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:53:19.850329 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:53:19.850335 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:53:19.850341 | orchestrator |
2026-04-13 00:53:19.850347 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-13 00:53:19.850353 | orchestrator | Monday 13 April 2026 00:53:12 +0000 (0:00:00.768) 0:02:35.169 **********
2026-04-13 00:53:19.850359 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:19.850365 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:19.850372 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:53:19.850378 | orchestrator |
2026-04-13 00:53:19.850384 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-13 00:53:19.850390 | orchestrator | Monday 13 April 2026 00:53:13 +0000 (0:00:00.680) 0:02:35.850 **********
2026-04-13 00:53:19.850396 | orchestrator | ok: 
[testbed-node-0] 2026-04-13 00:53:19.850402 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:19.850408 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:19.850414 | orchestrator | 2026-04-13 00:53:19.850420 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-13 00:53:19.850426 | orchestrator | Monday 13 April 2026 00:53:14 +0000 (0:00:00.794) 0:02:36.644 ********** 2026-04-13 00:53:19.850432 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:19.850438 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:19.850444 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:19.850450 | orchestrator | 2026-04-13 00:53:19.850456 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-13 00:53:19.850462 | orchestrator | Monday 13 April 2026 00:53:14 +0000 (0:00:00.657) 0:02:37.301 ********** 2026-04-13 00:53:19.850469 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:19.850475 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:19.850481 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:19.850487 | orchestrator | 2026-04-13 00:53:19.850507 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-13 00:53:19.850519 | orchestrator | Monday 13 April 2026 00:53:15 +0000 (0:00:00.730) 0:02:38.032 ********** 2026-04-13 00:53:19.850530 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:19.850536 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:19.850542 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:19.850548 | orchestrator | 2026-04-13 00:53:19.850554 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:53:19.850561 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-13 00:53:19.850568 | orchestrator | testbed-node-1 : ok=43  changed=19  
unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-13 00:53:19.850574 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-13 00:53:19.850580 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:53:19.850591 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:53:19.850601 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:53:19.850607 | orchestrator | 2026-04-13 00:53:19.850613 | orchestrator | 2026-04-13 00:53:19.850620 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:53:19.850626 | orchestrator | Monday 13 April 2026 00:53:16 +0000 (0:00:01.457) 0:02:39.490 ********** 2026-04-13 00:53:19.850632 | orchestrator | =============================================================================== 2026-04-13 00:53:19.850638 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 36.31s 2026-04-13 00:53:19.850645 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.51s 2026-04-13 00:53:19.850651 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.72s 2026-04-13 00:53:19.850657 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.71s 2026-04-13 00:53:19.850663 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.91s 2026-04-13 00:53:19.850669 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.13s 2026-04-13 00:53:19.850676 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.88s 2026-04-13 00:53:19.850687 | orchestrator | ovn-controller : Reload systemd config 
---------------------------------- 3.27s
2026-04-13 00:53:19.850693 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.11s
2026-04-13 00:53:19.850699 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.09s
2026-04-13 00:53:19.850705 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.95s
2026-04-13 00:53:19.850711 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.25s
2026-04-13 00:53:19.850717 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.19s
2026-04-13 00:53:19.850723 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.17s
2026-04-13 00:53:19.850729 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.10s
2026-04-13 00:53:19.850735 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.06s
2026-04-13 00:53:19.850741 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.95s
2026-04-13 00:53:19.850748 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.77s
2026-04-13 00:53:19.850754 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.63s
2026-04-13 00:53:19.850760 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.46s
2026-04-13 00:53:19.850767 | orchestrator | 2026-04-13 00:53:19 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:53:19.850773 | orchestrator | 2026-04-13 00:53:19 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:53:22.900232 | orchestrator | 2026-04-13 00:53:22 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:53:22.903223 | orchestrator | 2026-04-13 00:53:22 | INFO  | Task
1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13 00:53:22.903366 | orchestrator | 2026-04-13 00:53:22 | INFO  | Wait 1 second(s) until the next check
[... identical polling rounds repeated every ~3 seconds from 00:53:25 to 00:56:13, both tasks remaining in state STARTED, elided ...]
2026-04-13 00:56:16.858284 | orchestrator | 2026-04-13 00:56:16 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:56:16.858742 | orchestrator | 2026-04-13 00:56:16 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state STARTED
2026-04-13
00:56:16.859086 | orchestrator | 2026-04-13 00:56:16 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:56:19.899473 | orchestrator | 2026-04-13 00:56:19 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:56:19.901377 | orchestrator | 2026-04-13 00:56:19 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:56:19.903689 | orchestrator | 2026-04-13 00:56:19 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:56:19.913676 | orchestrator | 2026-04-13 00:56:19 | INFO  | Task 1aec2b85-067c-458a-9450-2f1f49318bb3 is in state SUCCESS
2026-04-13 00:56:19.916766 | orchestrator |
2026-04-13 00:56:19.916818 | orchestrator |
2026-04-13 00:56:19.916832 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 00:56:19.916844 | orchestrator |
2026-04-13 00:56:19.916856 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 00:56:19.916890 | orchestrator | Monday 13 April 2026 00:49:25 +0000 (0:00:00.744) 0:00:00.744 **********
2026-04-13 00:56:19.916902 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:56:19.916914 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:56:19.916925 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:56:19.916936 | orchestrator |
2026-04-13 00:56:19.916948 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 00:56:19.916959 | orchestrator | Monday 13 April 2026 00:49:26 +0000 (0:00:00.670) 0:00:01.415 **********
2026-04-13 00:56:19.916971 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-13 00:56:19.916982 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-13 00:56:19.916993 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-13 00:56:19.917004 | orchestrator |
2026-04-13 00:56:19.917015 |
orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-13 00:56:19.917026 | orchestrator |
2026-04-13 00:56:19.917037 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-13 00:56:19.917048 | orchestrator | Monday 13 April 2026 00:49:27 +0000 (0:00:00.446) 0:00:01.861 **********
2026-04-13 00:56:19.917059 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:56:19.917071 | orchestrator |
2026-04-13 00:56:19.917082 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-13 00:56:19.917094 | orchestrator | Monday 13 April 2026 00:49:28 +0000 (0:00:01.163) 0:00:03.025 **********
2026-04-13 00:56:19.917105 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:56:19.917116 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:56:19.917127 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:56:19.917138 | orchestrator |
2026-04-13 00:56:19.917150 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-13 00:56:19.917221 | orchestrator | Monday 13 April 2026 00:49:29 +0000 (0:00:01.646) 0:00:04.672 **********
2026-04-13 00:56:19.917235 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:56:19.917247 | orchestrator |
2026-04-13 00:56:19.917258 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-13 00:56:19.917270 | orchestrator | Monday 13 April 2026 00:49:30 +0000 (0:00:01.125) 0:00:05.797 **********
2026-04-13 00:56:19.917281 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:56:19.917292 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:56:19.917304 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:56:19.917315 | orchestrator |
2026-04-13 00:56:19.917327 | orchestrator | TASK [sysctl :
Setting sysctl values] ******************************************
2026-04-13 00:56:19.917338 | orchestrator | Monday 13 April 2026 00:49:33 +0000 (0:00:02.191) 0:00:07.989 **********
2026-04-13 00:56:19.917350 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-13 00:56:19.917363 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-13 00:56:19.917376 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-13 00:56:19.917389 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-13 00:56:19.917401 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-13 00:56:19.917443 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-13 00:56:19.917458 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-13 00:56:19.917471 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-13 00:56:19.917484 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-13 00:56:19.917496 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-13 00:56:19.917647 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-13 00:56:19.917661 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-13 00:56:19.917674 | orchestrator |
2026-04-13 00:56:19.917686 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-13 00:56:19.917756 | orchestrator | Monday 13 April 2026 00:49:38
+0000 (0:00:05.222) 0:00:13.212 **********
2026-04-13 00:56:19.917769 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-13 00:56:19.917851 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-13 00:56:19.917865 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-13 00:56:19.917877 | orchestrator |
2026-04-13 00:56:19.917889 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-13 00:56:19.917901 | orchestrator | Monday 13 April 2026 00:49:39 +0000 (0:00:01.441) 0:00:14.653 **********
2026-04-13 00:56:19.917914 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-13 00:56:19.917926 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-13 00:56:19.917938 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-13 00:56:19.917950 | orchestrator |
2026-04-13 00:56:19.917963 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-13 00:56:19.917975 | orchestrator | Monday 13 April 2026 00:49:43 +0000 (0:00:03.489) 0:00:18.142 **********
2026-04-13 00:56:19.918011 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-13 00:56:19.918071 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.918099 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-13 00:56:19.918111 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.918123 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-13 00:56:19.918134 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.918178 | orchestrator |
2026-04-13 00:56:19.918190 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-13 00:56:19.918201 | orchestrator | Monday 13 April 2026 00:49:44 +0000 (0:00:00.803) 0:00:18.945 **********
2026-04-13 00:56:19.918215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy',
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.918232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.918244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.918271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.918285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.918304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.918317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.918329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.918341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.918353 | orchestrator |
2026-04-13 00:56:19.918365 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-04-13 00:56:19.918377 | orchestrator | Monday 13 April 2026 00:49:46 +0000 (0:00:02.476) 0:00:21.422 **********
2026-04-13 00:56:19.918388 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:56:19.918400 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:56:19.918428 |
orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.918450 | orchestrator | 2026-04-13 00:56:19.918461 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-13 00:56:19.918473 | orchestrator | Monday 13 April 2026 00:49:48 +0000 (0:00:01.722) 0:00:23.144 ********** 2026-04-13 00:56:19.918484 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-04-13 00:56:19.918496 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-04-13 00:56:19.918507 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-04-13 00:56:19.918518 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-04-13 00:56:19.918530 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-04-13 00:56:19.918541 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-04-13 00:56:19.918552 | orchestrator | 2026-04-13 00:56:19.918563 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-13 00:56:19.918575 | orchestrator | Monday 13 April 2026 00:49:52 +0000 (0:00:03.807) 0:00:26.952 ********** 2026-04-13 00:56:19.918591 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.918603 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.918614 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.918625 | orchestrator | 2026-04-13 00:56:19.918637 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-13 00:56:19.918648 | orchestrator | Monday 13 April 2026 00:49:53 +0000 (0:00:01.367) 0:00:28.320 ********** 2026-04-13 00:56:19.918659 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:56:19.918671 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:56:19.918682 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:56:19.918693 | orchestrator | 2026-04-13 00:56:19.918891 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-13 
00:56:19.918905 | orchestrator | Monday 13 April 2026 00:49:55 +0000 (0:00:02.495) 0:00:30.815 ********** 2026-04-13 00:56:19.918927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-13 00:56:19.918959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:56:19.918981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 
00:56:19.919004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1071f1b4b1f1b7385fe35d2214a34032964cbbad', '__omit_place_holder__1071f1b4b1f1b7385fe35d2214a34032964cbbad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-13 00:56:19.919036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-13 00:56:19.919058 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.919087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:56:19.919111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.919133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-13 00:56:19.919163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1071f1b4b1f1b7385fe35d2214a34032964cbbad', '__omit_place_holder__1071f1b4b1f1b7385fe35d2214a34032964cbbad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-13 00:56:19.919176 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.919188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:56:19.919207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.919219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1071f1b4b1f1b7385fe35d2214a34032964cbbad', '__omit_place_holder__1071f1b4b1f1b7385fe35d2214a34032964cbbad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-13 00:56:19.919231 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.919242 | orchestrator | 2026-04-13 00:56:19.919253 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-13 00:56:19.919269 | orchestrator | Monday 13 April 2026 00:49:56 +0000 (0:00:00.718) 0:00:31.534 ********** 2026-04-13 00:56:19.919281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-13 00:56:19.919293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-13 00:56:19.919311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-13 00:56:19.919324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:56:19.919341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.919353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1071f1b4b1f1b7385fe35d2214a34032964cbbad', '__omit_place_holder__1071f1b4b1f1b7385fe35d2214a34032964cbbad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-13 00:56:19.919370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:56:19.919382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:56:19.919393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.919602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1071f1b4b1f1b7385fe35d2214a34032964cbbad', '__omit_place_holder__1071f1b4b1f1b7385fe35d2214a34032964cbbad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-13 00:56:19.919633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.919646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__1071f1b4b1f1b7385fe35d2214a34032964cbbad', '__omit_place_holder__1071f1b4b1f1b7385fe35d2214a34032964cbbad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-13 00:56:19.919657 | orchestrator | 2026-04-13 00:56:19.919669 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-13 00:56:19.919680 | orchestrator | Monday 13 April 2026 00:50:00 +0000 (0:00:03.930) 0:00:35.465 ********** 2026-04-13 00:56:19.919760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-13 00:56:19.919774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-13 00:56:19.919786 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-13 00:56:19.919960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:56:19.919981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:56:19.919992 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:56:19.920003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:56:19.920018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:56:19.920030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:56:19.920065 | orchestrator | 2026-04-13 00:56:19.920076 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-13 00:56:19.920087 | orchestrator | Monday 13 April 2026 00:50:04 +0000 (0:00:03.958) 0:00:39.423 ********** 2026-04-13 00:56:19.920105 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-13 00:56:19.920122 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-13 00:56:19.920139 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-13 00:56:19.920155 | orchestrator | 2026-04-13 00:56:19.920173 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-13 00:56:19.920189 | orchestrator | Monday 13 April 2026 00:50:06 +0000 (0:00:01.813) 0:00:41.236 ********** 2026-04-13 00:56:19.920217 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-13 00:56:19.920236 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-13 00:56:19.920254 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-13 00:56:19.920273 | orchestrator | 2026-04-13 00:56:19.920331 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-13 00:56:19.920353 | orchestrator | Monday 13 April 2026 00:50:12 +0000 (0:00:06.115) 0:00:47.352 ********** 
2026-04-13 00:56:19.920372 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.920405 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.920471 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.920482 | orchestrator | 2026-04-13 00:56:19.920492 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-13 00:56:19.920502 | orchestrator | Monday 13 April 2026 00:50:14 +0000 (0:00:01.602) 0:00:48.954 ********** 2026-04-13 00:56:19.920513 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-13 00:56:19.920524 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-13 00:56:19.920534 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-13 00:56:19.920544 | orchestrator | 2026-04-13 00:56:19.920555 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-13 00:56:19.920565 | orchestrator | Monday 13 April 2026 00:50:16 +0000 (0:00:02.568) 0:00:51.522 ********** 2026-04-13 00:56:19.920575 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-13 00:56:19.920585 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-13 00:56:19.920595 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-13 00:56:19.920606 | orchestrator | 2026-04-13 00:56:19.920616 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-13 00:56:19.920626 | orchestrator | Monday 13 April 2026 00:50:19 +0000 (0:00:02.628) 0:00:54.150 
********** 2026-04-13 00:56:19.920636 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-13 00:56:19.920646 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-13 00:56:19.920789 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-13 00:56:19.920802 | orchestrator | 2026-04-13 00:56:19.920812 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-13 00:56:19.920822 | orchestrator | Monday 13 April 2026 00:50:21 +0000 (0:00:02.038) 0:00:56.189 ********** 2026-04-13 00:56:19.920832 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-13 00:56:19.920842 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-13 00:56:19.920852 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-13 00:56:19.920917 | orchestrator | 2026-04-13 00:56:19.920927 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-13 00:56:19.920937 | orchestrator | Monday 13 April 2026 00:50:23 +0000 (0:00:02.090) 0:00:58.279 ********** 2026-04-13 00:56:19.920948 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.920958 | orchestrator | 2026-04-13 00:56:19.920968 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-04-13 00:56:19.920986 | orchestrator | Monday 13 April 2026 00:50:24 +0000 (0:00:00.708) 0:00:58.987 ********** 2026-04-13 00:56:19.920997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-13 00:56:19.921017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-13 00:56:19.921036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-13 00:56:19.921047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:56:19.921058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:56:19.921069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:56:19.921085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:56:19.921102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:56:19.921113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:56:19.921123 | orchestrator | 2026-04-13 00:56:19.921134 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-04-13 00:56:19.921144 | orchestrator | Monday 13 April 2026 00:50:27 +0000 (0:00:03.786) 0:01:02.774 ********** 2026-04-13 00:56:19.921160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-13 00:56:19.921172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:56:19.921182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.921193 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.921203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-13 00:56:19.921223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:56:19.921234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-13 00:56:19.921245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.921256 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.921272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:56:19.921283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.921293 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.921304 | orchestrator | 2026-04-13 00:56:19.921314 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-04-13 00:56:19.921325 | orchestrator | Monday 13 April 2026 00:50:28 +0000 (0:00:00.554) 0:01:03.328 ********** 2026-04-13 00:56:19.921335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-13 00:56:19.921355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:56:19.921382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.921440 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.921462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-13 00:56:19.921491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:56:19.921504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.921515 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.921525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-13 00:56:19.921544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:56:19.921613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.921624 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.921634 | orchestrator | 2026-04-13 00:56:19.921644 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-13 00:56:19.921654 | orchestrator | Monday 13 April 2026 00:50:29 +0000 (0:00:01.153) 0:01:04.481 ********** 2026-04-13 00:56:19.921665 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-13 00:56:19.921682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:56:19.921693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.921704 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.921714 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-13 00:56:19.921731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:56:19.921741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.921752 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.921762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-13 00:56:19.921773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:56:19.921788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.921799 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.921810 | orchestrator | 2026-04-13 00:56:19.921820 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS 
certificate] *** 2026-04-13 00:56:19.921830 | orchestrator | Monday 13 April 2026 00:50:30 +0000 (0:00:00.613) 0:01:05.095 ********** 2026-04-13 00:56:19.921840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-13 00:56:19.921886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:56:19.921898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-04-13 00:56:19.921908 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.921922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-13 00:56:19.921933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:56:19.921944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.921954 | 
orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.921970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-13 00:56:19.921981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:56:19.921997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:56:19.922007 | orchestrator | skipping: [testbed-node-0] 
2026-04-13 00:56:19.922062 | orchestrator |
2026-04-13 00:56:19.922073 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-04-13 00:56:19.922083 | orchestrator | Monday 13 April 2026 00:50:31 +0000 (0:00:00.811) 0:01:05.907 **********
2026-04-13 00:56:19.922094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.922110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.922121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.922132 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.922149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.922160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.922177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.922192 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.922211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.922235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.922254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.922273 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.922291 | orchestrator |
2026-04-13 00:56:19.922463 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-04-13 00:56:19.922479 | orchestrator | Monday 13 April 2026 00:50:32 +0000 (0:00:00.967) 0:01:06.874 **********
2026-04-13 00:56:19.922490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.922509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.922530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.922541 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.922551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.922562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.922579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.922589 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.922600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.922616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.922632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.922643 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.922653 | orchestrator |
2026-04-13 00:56:19.922663 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-04-13 00:56:19.922814 | orchestrator | Monday 13 April 2026 00:50:32 +0000 (0:00:00.707) 0:01:07.582 **********
2026-04-13 00:56:19.922875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.922886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.922902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.922913 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.922923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.922934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.922961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.922972 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.922983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.922993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.923003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.923014 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.923024 | orchestrator |
2026-04-13 00:56:19.923034 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-04-13 00:56:19.923044 | orchestrator | Monday 13 April 2026 00:50:33 +0000 (0:00:00.915) 0:01:08.498 **********
2026-04-13 00:56:19.923059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.923070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.923086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.923096 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.923112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.923124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.923134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.923145 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.923155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.923216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.923237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.923248 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.923258 | orchestrator |
2026-04-13 00:56:19.923268 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-04-13 00:56:19.923278 | orchestrator | Monday 13 April 2026 00:50:35 +0000 (0:00:01.424) 0:01:09.922 **********
2026-04-13 00:56:19.923289 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-13 00:56:19.923299 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-13 00:56:19.923315 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-13 00:56:19.923326 | orchestrator |
2026-04-13 00:56:19.923336 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-04-13 00:56:19.923385 | orchestrator | Monday 13 April 2026 00:50:36 +0000 (0:00:01.612) 0:01:11.535 **********
2026-04-13 00:56:19.923396 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-13 00:56:19.923444 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-13 00:56:19.923457 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-13 00:56:19.923468 | orchestrator |
2026-04-13 00:56:19.923478 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-04-13 00:56:19.923488 | orchestrator | Monday 13 April 2026 00:50:38 +0000 (0:00:01.556) 0:01:13.092 **********
2026-04-13 00:56:19.923555 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-13 00:56:19.923567 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-13 00:56:19.923577 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-13 00:56:19.923587 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.923597 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-13 00:56:19.923607 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-13 00:56:19.923617 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.923627 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-13 00:56:19.923637 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.923647 | orchestrator |
2026-04-13 00:56:19.923658 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-04-13 00:56:19.923668 | orchestrator | Monday 13 April 2026 00:50:40 +0000 (0:00:02.229) 0:01:15.321 **********
2026-04-13 00:56:19.923679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.923712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.923723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-13 00:56:19.923740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.923752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.923762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-13 00:56:19.923773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.923784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.923803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-13 00:56:19.923812 | orchestrator |
2026-04-13 00:56:19.923820 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-04-13 00:56:19.923829 | orchestrator | Monday 13 April 2026 00:50:44 +0000 (0:00:04.516) 0:01:19.838 **********
2026-04-13 00:56:19.923837 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:56:19.923845 | orchestrator |
2026-04-13 00:56:19.923854 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-04-13 00:56:19.923862 | orchestrator | Monday 13 April 2026 00:50:45 +0000 (0:00:00.797) 0:01:20.636 **********
2026-04-13 00:56:19.923871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-13 00:56:19.923887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-13 00:56:19.923896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-13 00:56:19.923959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-13 00:56:19.923976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-13 00:56:19.923985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-13 00:56:19.923993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-13 00:56:19.926091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-13 00:56:19.926130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-13 00:56:19.926146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-13 00:56:19.926176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-13 00:56:19.926197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926207 | orchestrator | 2026-04-13 00:56:19.926216 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-13 00:56:19.926224 | orchestrator | Monday 13 April 2026 00:50:51 +0000 (0:00:05.829) 0:01:26.465 ********** 2026-04-13 00:56:19.926233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-13 00:56:19.926250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})  2026-04-13 00:56:19.926259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926281 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.926290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-13 00:56:19.926302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-13 00:56:19.926310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 
'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-13 00:56:19.926333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926342 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.926351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-13 00:56:19.926364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926384 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.926393 | orchestrator | 2026-04-13 00:56:19.926401 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-13 00:56:19.926456 | orchestrator | Monday 13 April 2026 00:50:52 +0000 (0:00:01.111) 0:01:27.577 ********** 2026-04-13 00:56:19.926467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-13 00:56:19.926476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-13 00:56:19.926485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-13 00:56:19.926494 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.926502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-13 00:56:19.926511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-13 00:56:19.926519 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.926527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-13 00:56:19.926536 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.926544 | orchestrator | 2026-04-13 00:56:19.926558 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-13 00:56:19.926568 | orchestrator | Monday 13 April 2026 00:50:53 +0000 (0:00:01.144) 0:01:28.721 ********** 2026-04-13 00:56:19.926582 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.926595 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.926610 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.926634 | orchestrator | 2026-04-13 00:56:19.926647 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-13 00:56:19.926656 | orchestrator | Monday 13 April 2026 00:50:55 +0000 (0:00:02.098) 0:01:30.820 ********** 2026-04-13 00:56:19.926664 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.926672 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.926680 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.926688 | orchestrator | 2026-04-13 00:56:19.926696 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-13 00:56:19.926705 | orchestrator | Monday 13 April 2026 
00:51:00 +0000 (0:00:04.151) 0:01:34.971 ********** 2026-04-13 00:56:19.926713 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.926721 | orchestrator | 2026-04-13 00:56:19.926729 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-13 00:56:19.926737 | orchestrator | Monday 13 April 2026 00:51:01 +0000 (0:00:00.975) 0:01:35.947 ********** 2026-04-13 00:56:19.926746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 00:56:19.926760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 00:56:19.926801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 00:56:19.926831 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926849 | orchestrator | 2026-04-13 00:56:19.926857 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-13 00:56:19.926865 | orchestrator | Monday 13 April 2026 00:51:06 +0000 (0:00:04.933) 0:01:40.880 ********** 2026-04-13 00:56:19.926879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.926892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926909 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.926921 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.926930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926952 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.926965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.926974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926982 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.926991 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.926999 | orchestrator | 2026-04-13 00:56:19.927008 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-13 00:56:19.927016 | orchestrator | Monday 13 April 2026 00:51:07 +0000 (0:00:01.175) 0:01:42.056 ********** 2026-04-13 00:56:19.927025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-13 00:56:19.927036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-13 00:56:19.927045 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.927054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-13 00:56:19.927062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}})  2026-04-13 00:56:19.927070 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.927083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-13 00:56:19.927092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-13 00:56:19.927100 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.927108 | orchestrator | 2026-04-13 00:56:19.927116 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-13 00:56:19.927124 | orchestrator | Monday 13 April 2026 00:51:08 +0000 (0:00:01.067) 0:01:43.124 ********** 2026-04-13 00:56:19.927133 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.927141 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.927149 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.927157 | orchestrator | 2026-04-13 00:56:19.927165 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-13 00:56:19.927174 | orchestrator | Monday 13 April 2026 00:51:10 +0000 (0:00:01.830) 0:01:44.955 ********** 2026-04-13 00:56:19.927182 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.927190 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.927198 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.927206 | orchestrator | 2026-04-13 00:56:19.927219 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-13 00:56:19.927227 | orchestrator | Monday 13 April 2026 00:51:12 +0000 (0:00:02.423) 0:01:47.378 ********** 2026-04-13 00:56:19.927236 | orchestrator | 
skipping: [testbed-node-0] 2026-04-13 00:56:19.927244 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.927252 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.927260 | orchestrator | 2026-04-13 00:56:19.927268 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-13 00:56:19.927276 | orchestrator | Monday 13 April 2026 00:51:12 +0000 (0:00:00.317) 0:01:47.695 ********** 2026-04-13 00:56:19.927285 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.927296 | orchestrator | 2026-04-13 00:56:19.927309 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-13 00:56:19.927321 | orchestrator | Monday 13 April 2026 00:51:13 +0000 (0:00:00.975) 0:01:48.671 ********** 2026-04-13 00:56:19.927335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-13 00:56:19.927351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-13 00:56:19.927380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-13 00:56:19.927462 | orchestrator | 2026-04-13 00:56:19.927475 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-13 00:56:19.927484 | orchestrator | Monday 13 April 2026 00:51:18 +0000 (0:00:04.453) 0:01:53.125 ********** 2026-04-13 00:56:19.927500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-13 00:56:19.927509 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.927518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-13 00:56:19.927527 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.927536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-13 00:56:19.927544 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.927559 | orchestrator | 2026-04-13 00:56:19.927568 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-13 00:56:19.927576 | orchestrator | Monday 13 April 2026 00:51:21 +0000 (0:00:03.460) 0:01:56.586 ********** 2026-04-13 00:56:19.927585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-13 00:56:19.927600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-13 00:56:19.927609 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.927618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-13 00:56:19.927627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-13 00:56:19.927636 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.927651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-13 00:56:19.927665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-13 00:56:19.927678 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.927692 | orchestrator | 2026-04-13 00:56:19.927706 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 
2026-04-13 00:56:19.927721 | orchestrator | Monday 13 April 2026 00:51:24 +0000 (0:00:02.565) 0:01:59.151 ********** 2026-04-13 00:56:19.927734 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.927747 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.927760 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.927773 | orchestrator | 2026-04-13 00:56:19.927787 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-13 00:56:19.927800 | orchestrator | Monday 13 April 2026 00:51:24 +0000 (0:00:00.410) 0:01:59.562 ********** 2026-04-13 00:56:19.927813 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.927827 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.927840 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.927854 | orchestrator | 2026-04-13 00:56:19.927878 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-13 00:56:19.927893 | orchestrator | Monday 13 April 2026 00:51:26 +0000 (0:00:01.441) 0:02:01.004 ********** 2026-04-13 00:56:19.927902 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.927910 | orchestrator | 2026-04-13 00:56:19.927918 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-13 00:56:19.927927 | orchestrator | Monday 13 April 2026 00:51:27 +0000 (0:00:00.876) 0:02:01.880 ********** 2026-04-13 00:56:19.927952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 00:56:19.927962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.927971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 00:56:19.928033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 00:56:19.928076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928102 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928110 | orchestrator | 2026-04-13 00:56:19.928119 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-13 00:56:19.928127 | orchestrator | Monday 13 April 2026 00:51:31 +0000 (0:00:04.204) 0:02:06.085 ********** 2026-04-13 00:56:19.928139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.928148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.928176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928205 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.928213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928230 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.928243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.928257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928286 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.928294 | orchestrator | 2026-04-13 00:56:19.928303 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-13 00:56:19.928311 | orchestrator | Monday 13 April 2026 00:51:31 +0000 (0:00:00.733) 0:02:06.818 ********** 2026-04-13 00:56:19.928320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-13 00:56:19.928329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-13 00:56:19.928338 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.928346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-13 00:56:19.928354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-13 00:56:19.928367 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.928380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-13 00:56:19.928389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-13 00:56:19.928397 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.928406 | orchestrator | 2026-04-13 00:56:19.928433 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-13 00:56:19.928442 | orchestrator | Monday 13 April 2026 00:51:33 +0000 (0:00:01.156) 0:02:07.974 ********** 2026-04-13 00:56:19.928450 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.928458 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.928467 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.928475 | orchestrator | 2026-04-13 00:56:19.928484 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-13 00:56:19.928492 | orchestrator | Monday 13 April 2026 00:51:34 +0000 (0:00:01.320) 0:02:09.295 ********** 2026-04-13 00:56:19.928500 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.928509 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.928517 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.928525 | orchestrator | 2026-04-13 00:56:19.928533 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-13 00:56:19.928542 | orchestrator | Monday 13 April 2026 00:51:36 +0000 (0:00:02.111) 0:02:11.406 ********** 2026-04-13 00:56:19.928550 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.928558 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.928566 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.928575 | orchestrator | 2026-04-13 00:56:19.928583 | orchestrator | TASK [include_role : 
cyborg] *************************************************** 2026-04-13 00:56:19.928591 | orchestrator | Monday 13 April 2026 00:51:36 +0000 (0:00:00.333) 0:02:11.740 ********** 2026-04-13 00:56:19.928599 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.928608 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.928616 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.928624 | orchestrator | 2026-04-13 00:56:19.928632 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-13 00:56:19.928641 | orchestrator | Monday 13 April 2026 00:51:37 +0000 (0:00:00.302) 0:02:12.043 ********** 2026-04-13 00:56:19.928649 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.928658 | orchestrator | 2026-04-13 00:56:19.928672 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-13 00:56:19.928685 | orchestrator | Monday 13 April 2026 00:51:38 +0000 (0:00:01.017) 0:02:13.060 ********** 2026-04-13 00:56:19.928706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-13 00:56:19.928717 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 00:56:19.928731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.928785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-13 00:56:19.928798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 00:56:19.930160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-04-13 00:56:19.930235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 
'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-13 00:56:19.930358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 00:56:19.930372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 
00:56:19.930407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930472 | orchestrator | 2026-04-13 00:56:19.930486 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-13 00:56:19.930497 | orchestrator | Monday 13 April 2026 00:51:42 +0000 (0:00:04.269) 0:02:17.329 ********** 2026-04-13 00:56:19.930510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-13 00:56:19.930533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 00:56:19.930546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930623 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.930642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-13 00:56:19.930655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 00:56:19.930667 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930738 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.930758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}})  2026-04-13 00:56:19.930771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 00:56:19.930783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930819 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.930861 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.930873 | orchestrator | 2026-04-13 00:56:19.930884 | 
orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-13 00:56:19.930896 | orchestrator | Monday 13 April 2026 00:51:43 +0000 (0:00:00.882) 0:02:18.211 ********** 2026-04-13 00:56:19.930908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-13 00:56:19.930920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-13 00:56:19.930932 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.930943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-13 00:56:19.930954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-13 00:56:19.931008 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.931094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-13 00:56:19.931108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-13 00:56:19.931119 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.931131 | orchestrator | 2026-04-13 00:56:19.931142 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] 
********** 2026-04-13 00:56:19.931154 | orchestrator | Monday 13 April 2026 00:51:44 +0000 (0:00:01.471) 0:02:19.683 ********** 2026-04-13 00:56:19.931165 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.931176 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.931187 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.931198 | orchestrator | 2026-04-13 00:56:19.931209 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-13 00:56:19.931237 | orchestrator | Monday 13 April 2026 00:51:46 +0000 (0:00:01.307) 0:02:20.991 ********** 2026-04-13 00:56:19.931258 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.931270 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.931281 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.931292 | orchestrator | 2026-04-13 00:56:19.931303 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-13 00:56:19.931320 | orchestrator | Monday 13 April 2026 00:51:48 +0000 (0:00:02.081) 0:02:23.072 ********** 2026-04-13 00:56:19.931332 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.931343 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.931354 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.931365 | orchestrator | 2026-04-13 00:56:19.931376 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-13 00:56:19.931387 | orchestrator | Monday 13 April 2026 00:51:48 +0000 (0:00:00.334) 0:02:23.406 ********** 2026-04-13 00:56:19.931399 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.931431 | orchestrator | 2026-04-13 00:56:19.931451 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-13 00:56:19.931471 | orchestrator | Monday 13 April 2026 00:51:49 +0000 (0:00:01.010) 
0:02:24.417 ********** 2026-04-13 00:56:19.931510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-13 00:56:19.931542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-13 00:56:19.931563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-13 00:56:19.931578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-13 00:56:19.931602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-13 00:56:19.931625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-13 00:56:19.931644 | orchestrator | 2026-04-13 00:56:19.931656 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-13 00:56:19.931667 | orchestrator | Monday 13 April 2026 00:51:53 +0000 (0:00:04.114) 0:02:28.532 ********** 2026-04-13 00:56:19.931684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-13 00:56:19.931705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-13 00:56:19.931724 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.931745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-13 00:56:19.931765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 
'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-13 00:56:19.931784 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.931797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-13 00:56:19.931820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-13 00:56:19.931840 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.931851 | orchestrator | 2026-04-13 00:56:19.931863 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-13 00:56:19.931874 | orchestrator | Monday 13 April 2026 00:51:56 +0000 (0:00:02.884) 0:02:31.416 ********** 2026-04-13 00:56:19.931886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-13 00:56:19.931899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-13 00:56:19.931910 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.931922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-13 00:56:19.931934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-13 00:56:19.931946 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.931970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-13 00:56:19.931983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-13 00:56:19.931995 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.932006 | orchestrator | 2026-04-13 00:56:19.932018 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-13 00:56:19.932037 | orchestrator | Monday 13 April 2026 00:52:00 +0000 (0:00:03.514) 0:02:34.931 ********** 2026-04-13 00:56:19.932049 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.932060 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.932071 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.932082 | orchestrator | 2026-04-13 00:56:19.932094 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-13 00:56:19.932105 | orchestrator | Monday 13 April 2026 00:52:01 +0000 (0:00:01.348) 0:02:36.280 ********** 2026-04-13 00:56:19.932116 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.932128 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.932145 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.932156 | orchestrator | 2026-04-13 00:56:19.932168 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-13 00:56:19.932179 | orchestrator | Monday 13 April 2026 00:52:03 +0000 (0:00:01.998) 0:02:38.278 ********** 2026-04-13 00:56:19.932190 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.932202 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.932213 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.932224 | orchestrator | 2026-04-13 00:56:19.932235 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-13 00:56:19.932247 | orchestrator | Monday 13 April 2026 00:52:03 
+0000 (0:00:00.317) 0:02:38.596 ********** 2026-04-13 00:56:19.932258 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.932269 | orchestrator | 2026-04-13 00:56:19.932280 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-13 00:56:19.932291 | orchestrator | Monday 13 April 2026 00:52:04 +0000 (0:00:01.073) 0:02:39.670 ********** 2026-04-13 00:56:19.932304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 00:56:19.932317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 00:56:19.932334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-13 00:56:19.932346 | orchestrator |
2026-04-13 00:56:19.932358 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-04-13 00:56:19.932369 | orchestrator | Monday 13 April 2026 00:52:08 +0000 (0:00:03.520) 0:02:43.190 **********
2026-04-13 00:56:19.932387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-13 00:56:19.932399 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.932475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-13 00:56:19.932490 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.932503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-13 00:56:19.932515 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.932526 | orchestrator |
2026-04-13 00:56:19.932537 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-04-13 00:56:19.932548 | orchestrator | Monday 13 April 2026 00:52:08 +0000 (0:00:00.427) 0:02:43.618 **********
2026-04-13 00:56:19.932559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-04-13 00:56:19.932571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-04-13 00:56:19.932582 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.932593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-04-13 00:56:19.932604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-04-13 00:56:19.932615 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.932627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-04-13 00:56:19.932638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-04-13 00:56:19.932656 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.932668 | orchestrator |
2026-04-13 00:56:19.932679 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-04-13 00:56:19.932695 | orchestrator | Monday 13 April 2026 00:52:09 +0000 (0:00:00.997) 0:02:44.616 **********
2026-04-13 00:56:19.932706 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:56:19.932718 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:56:19.932729 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:56:19.932740 | orchestrator |
2026-04-13 00:56:19.932751 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-04-13 00:56:19.932763 | orchestrator | Monday 13 April 2026 00:52:11 +0000 (0:00:01.492) 0:02:46.108 **********
2026-04-13 00:56:19.932779 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:56:19.932799 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:56:19.932816 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:56:19.932844 | orchestrator |
2026-04-13 00:56:19.932867 | orchestrator | TASK [include_role : heat] *****************************************************
2026-04-13 00:56:19.932887 | orchestrator | Monday 13 April 2026 00:52:13 +0000 (0:00:02.312) 0:02:48.421 **********
2026-04-13 00:56:19.932906 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.932925 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.932945 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.932965 | orchestrator |
2026-04-13 00:56:19.932985 | orchestrator | TASK [include_role : horizon] **************************************************
2026-04-13 00:56:19.933005 | orchestrator | Monday 13 April 2026 00:52:13 +0000 (0:00:00.346) 0:02:48.768 **********
2026-04-13 00:56:19.933024 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:56:19.933046 | orchestrator |
2026-04-13 00:56:19.933066 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-04-13 00:56:19.933088 | orchestrator | Monday 13 April 2026 00:52:15 +0000 (0:00:01.107) 0:02:49.875 **********
2026-04-13 00:56:19.933126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-13 00:56:19.933172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-13 00:56:19.933210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-13 00:56:19.933244 | orchestrator |
2026-04-13 00:56:19.933266 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2026-04-13 00:56:19.933286 | orchestrator | Monday 13 April 2026 00:52:18 +0000 (0:00:03.589) 0:02:53.465 **********
2026-04-13 00:56:19.933317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-13 00:56:19.933340 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.933396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-13 00:56:19.933495 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.933756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-13 00:56:19.933788 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.933805 | orchestrator |
2026-04-13 00:56:19.933822 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-04-13 00:56:19.933839 | orchestrator | Monday 13 April 2026 00:52:19 +0000 (0:00:00.750) 0:02:54.215 **********
2026-04-13 00:56:19.933857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-13 00:56:19.933876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-13 00:56:19.933961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-13 00:56:19.933983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-13 00:56:19.934003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-13 00:56:19.934079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-13 00:56:19.934100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-13 00:56:19.934118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-13 00:56:19.934137 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.934157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-13 00:56:19.934176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-13 00:56:19.934194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-13 00:56:19.934212 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.934243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-13 00:56:19.934263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-13 00:56:19.934282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-13 00:56:19.934311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-13 00:56:19.934329 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.934349 | orchestrator |
2026-04-13 00:56:19.934369 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-04-13 00:56:19.934386 | orchestrator | Monday 13 April 2026 00:52:20 +0000 (0:00:01.070) 0:02:55.286 **********
2026-04-13 00:56:19.934399 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:56:19.934433 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:56:19.934452 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:56:19.934470 | orchestrator |
2026-04-13 00:56:19.934488 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-04-13 00:56:19.934504 | orchestrator | Monday 13 April 2026 00:52:22 +0000 (0:00:01.610) 0:02:56.897 **********
2026-04-13 00:56:19.934516 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:56:19.934527 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:56:19.934541 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:56:19.934558 | orchestrator |
2026-04-13 00:56:19.934576 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-04-13 00:56:19.934594 | orchestrator | Monday 13 April 2026 00:52:24 +0000 (0:00:02.112) 0:02:59.010 **********
2026-04-13 00:56:19.934612 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.934622 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.934632 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.934641 | orchestrator |
2026-04-13 00:56:19.934651 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-04-13 00:56:19.934662 | orchestrator | Monday 13 April 2026 00:52:24 +0000 (0:00:00.332) 0:02:59.342 **********
2026-04-13 00:56:19.934671 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.934681 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.934691 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.934701 | orchestrator |
2026-04-13 00:56:19.934711 | orchestrator | TASK [include_role : keystone] *************************************************
2026-04-13 00:56:19.934721 | orchestrator | Monday 13 April 2026 00:52:24 +0000 (0:00:00.335) 0:02:59.677 **********
2026-04-13 00:56:19.934737 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:56:19.934747 | orchestrator |
2026-04-13 00:56:19.934758 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-04-13 00:56:19.934776 | orchestrator | Monday 13 April 2026 00:52:25 +0000 (0:00:01.178) 0:03:00.855 **********
2026-04-13 00:56:19.934794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-13 00:56:19.934825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-13 00:56:19.934878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-13 00:56:19.934900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-13 00:56:19.934913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-13 00:56:19.934929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-13 00:56:19.934940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-13 00:56:19.934965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-13 00:56:19.934977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-13 00:56:19.934987 | orchestrator |
2026-04-13 00:56:19.934998 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-04-13 00:56:19.935008 | orchestrator | Monday 13 April 2026 00:52:29 +0000 (0:00:03.978) 0:03:04.834 **********
2026-04-13 00:56:19.935020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-13 00:56:19.935034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-13 00:56:19.935046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-13 00:56:19.935066 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.935084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-13 00:56:19.935096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-13 00:56:19.935106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-13 00:56:19.935117 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.935132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'},
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-13 00:56:19.935143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:56:19.935159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:56:19.935170 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.935180 | orchestrator | 2026-04-13 00:56:19.935195 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-13 00:56:19.935206 | orchestrator | Monday 13 
April 2026 00:52:30 +0000 (0:00:00.651) 0:03:05.486 **********
2026-04-13 00:56:19.935217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-13 00:56:19.935228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-13 00:56:19.935239 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.935249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-13 00:56:19.935260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-13 00:56:19.935271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-13 00:56:19.935281 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.935292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-13 00:56:19.935302 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.935312 | orchestrator |
2026-04-13 00:56:19.935322 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-04-13 00:56:19.935333 | orchestrator | Monday 13 April 2026 00:52:31 +0000 (0:00:01.092) 0:03:06.578 **********
2026-04-13 00:56:19.935348 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:56:19.935365 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:56:19.935383 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:56:19.935394 | orchestrator |
2026-04-13 00:56:19.935404 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-04-13 00:56:19.935472 | orchestrator | Monday 13 April 2026 00:52:33 +0000 (0:00:01.348) 0:03:07.927 **********
2026-04-13 00:56:19.935484 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:56:19.935494 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:56:19.935504 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:56:19.935522 | orchestrator |
2026-04-13 00:56:19.935537 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-04-13 00:56:19.935547 | orchestrator | Monday 13 April 2026 00:52:35 +0000 (0:00:00.417) 0:03:10.093 **********
2026-04-13 00:56:19.935557 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.935567 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.935575 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.935584 | orchestrator |
2026-04-13 00:56:19.935592 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-04-13 00:56:19.935601 | orchestrator | Monday 13 April 2026 00:52:35 +0000 (0:00:01.276) 0:03:10.511 **********
2026-04-13 00:56:19.935609 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13
00:56:19.935617 | orchestrator | 2026-04-13 00:56:19.935625 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-13 00:56:19.935633 | orchestrator | Monday 13 April 2026 00:52:36 +0000 (0:00:01.276) 0:03:11.788 ********** 2026-04-13 00:56:19.935642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 00:56:19.935659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 
00:56:19.935669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 00:56:19.935678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.935698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 00:56:19.935708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.935716 | orchestrator | 2026-04-13 00:56:19.935724 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-13 00:56:19.935733 | orchestrator | Monday 13 April 2026 00:52:41 +0000 (0:00:04.226) 0:03:16.015 ********** 2026-04-13 00:56:19.935747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-13 00:56:19.935756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.935768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-13 00:56:19.935782 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.935791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.935799 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.935812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-13 00:56:19.935821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-13 00:56:19.935829 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.935838 | orchestrator |
2026-04-13 00:56:19.935846 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-04-13 00:56:19.935855 | orchestrator | Monday 13 April 2026 00:52:42 +0000 (0:00:01.036) 0:03:17.052 **********
2026-04-13 00:56:19.935863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-04-13 00:56:19.935872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-04-13 00:56:19.935885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-04-13 00:56:19.935894 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.935902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-04-13 00:56:19.935910 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.935919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-04-13 00:56:19.935927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-04-13 00:56:19.935936 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.935944 | orchestrator |
2026-04-13 00:56:19.935952 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-04-13 00:56:19.935964 | orchestrator | Monday 13 April 2026 00:52:43 +0000 (0:00:01.101) 0:03:18.153 **********
2026-04-13 00:56:19.935972 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:56:19.935980 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:56:19.935988 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:56:19.935997 | orchestrator |
2026-04-13 00:56:19.936005 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-04-13 00:56:19.936013 | orchestrator | Monday 13 April 2026 00:52:44 +0000 (0:00:01.239) 0:03:19.393 **********
2026-04-13 00:56:19.936021 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:56:19.936030 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:56:19.936038 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:56:19.936046 | orchestrator |
2026-04-13 00:56:19.936054 |
orchestrator | TASK [include_role : manila] *************************************************** 2026-04-13 00:56:19.936062 | orchestrator | Monday 13 April 2026 00:52:46 +0000 (0:00:02.066) 0:03:21.460 ********** 2026-04-13 00:56:19.936070 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.936079 | orchestrator | 2026-04-13 00:56:19.936087 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-13 00:56:19.936095 | orchestrator | Monday 13 April 2026 00:52:47 +0000 (0:00:01.035) 0:03:22.495 ********** 2026-04-13 00:56:19.936108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-13 00:56:19.936117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-13 00:56:19.936160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-13 00:56:19.936206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936235 | orchestrator | 2026-04-13 00:56:19.936243 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-13 00:56:19.936251 | orchestrator | Monday 13 April 2026 00:52:51 +0000 (0:00:04.312) 0:03:26.808 ********** 2026-04-13 00:56:19.936264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-13 00:56:19.936278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936304 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.936316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-13 00:56:19.936325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 
'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936360 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.936369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-13 00:56:19.936377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.936406 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.936431 | orchestrator | 2026-04-13 00:56:19.936439 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-13 00:56:19.936448 | orchestrator | Monday 13 April 2026 00:52:52 +0000 (0:00:00.898) 0:03:27.706 ********** 2026-04-13 00:56:19.936456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-13 00:56:19.936471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-13 00:56:19.936479 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.936492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-13 00:56:19.936500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-13 00:56:19.936509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-13 00:56:19.936517 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.936526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-13 00:56:19.936534 | 
orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.936542 | orchestrator | 2026-04-13 00:56:19.936550 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-13 00:56:19.936558 | orchestrator | Monday 13 April 2026 00:52:53 +0000 (0:00:00.930) 0:03:28.637 ********** 2026-04-13 00:56:19.936567 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.936575 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.936583 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.936591 | orchestrator | 2026-04-13 00:56:19.936600 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-13 00:56:19.936608 | orchestrator | Monday 13 April 2026 00:52:55 +0000 (0:00:01.403) 0:03:30.040 ********** 2026-04-13 00:56:19.936616 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.936624 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.936632 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.936640 | orchestrator | 2026-04-13 00:56:19.936649 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-13 00:56:19.936657 | orchestrator | Monday 13 April 2026 00:52:57 +0000 (0:00:02.153) 0:03:32.193 ********** 2026-04-13 00:56:19.936665 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.936673 | orchestrator | 2026-04-13 00:56:19.936681 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-13 00:56:19.936689 | orchestrator | Monday 13 April 2026 00:52:58 +0000 (0:00:01.341) 0:03:33.535 ********** 2026-04-13 00:56:19.936698 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-13 00:56:19.936706 | orchestrator | 2026-04-13 00:56:19.936714 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-13 
00:56:19.936723 | orchestrator | Monday 13 April 2026 00:53:02 +0000 (0:00:03.420) 0:03:36.955 ********** 2026-04-13 00:56:19.936736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:56:19.936758 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-13 00:56:19.936768 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.936777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:56:19.936789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-13 00:56:19.936803 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.936817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:56:19.936827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-13 00:56:19.936835 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.936844 | orchestrator | 2026-04-13 00:56:19.936852 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 
2026-04-13 00:56:19.936860 | orchestrator | Monday 13 April 2026 00:53:04 +0000 (0:00:02.489) 0:03:39.444 ********** 2026-04-13 00:56:19.936872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:56:19.936886 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-13 00:56:19.936895 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.937049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:56:19.937064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-13 00:56:19.937073 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.937087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:56:19.937141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-13 00:56:19.937152 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.937161 | orchestrator | 2026-04-13 00:56:19.937169 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-13 
00:56:19.937178 | orchestrator | Monday 13 April 2026 00:53:07 +0000 (0:00:02.967) 0:03:42.412 ********** 2026-04-13 00:56:19.937186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-13 00:56:19.937196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-13 00:56:19.937204 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.937213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-13 00:56:19.937230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-13 00:56:19.937239 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.937248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-13 00:56:19.937294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}})  2026-04-13 00:56:19.937304 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.937313 | orchestrator | 2026-04-13 00:56:19.937321 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-13 00:56:19.937330 | orchestrator | Monday 13 April 2026 00:53:09 +0000 (0:00:02.392) 0:03:44.804 ********** 2026-04-13 00:56:19.937338 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.937346 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.937355 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.937363 | orchestrator | 2026-04-13 00:56:19.937371 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-13 00:56:19.937379 | orchestrator | Monday 13 April 2026 00:53:12 +0000 (0:00:02.140) 0:03:46.945 ********** 2026-04-13 00:56:19.937387 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.937395 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.937403 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.937429 | orchestrator | 2026-04-13 00:56:19.937438 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-13 00:56:19.937447 | orchestrator | Monday 13 April 2026 00:53:13 +0000 (0:00:01.905) 0:03:48.850 ********** 2026-04-13 00:56:19.937455 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.937463 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.937472 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.937480 | orchestrator | 2026-04-13 00:56:19.937488 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-13 00:56:19.937496 | orchestrator | Monday 13 April 2026 00:53:14 +0000 (0:00:00.318) 0:03:49.169 ********** 2026-04-13 00:56:19.937504 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 
00:56:19.937518 | orchestrator | 2026-04-13 00:56:19.937527 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-13 00:56:19.937535 | orchestrator | Monday 13 April 2026 00:53:15 +0000 (0:00:01.380) 0:03:50.550 ********** 2026-04-13 00:56:19.937544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-13 00:56:19.937557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-13 00:56:19.937566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-13 00:56:19.937575 | orchestrator | 2026-04-13 00:56:19.937609 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-13 00:56:19.937618 | orchestrator | Monday 13 April 2026 00:53:17 +0000 (0:00:01.506) 0:03:52.056 ********** 2026-04-13 00:56:19.937686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-13 00:56:19.937699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-13 00:56:19.937715 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.937737 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.937746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-13 00:56:19.937754 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.937763 | orchestrator | 2026-04-13 00:56:19.937771 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-13 00:56:19.937779 | orchestrator | Monday 13 April 2026 00:53:17 +0000 (0:00:00.450) 0:03:52.507 ********** 2026-04-13 00:56:19.937798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-13 00:56:19.937813 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.937827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-13 00:56:19.937841 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.937856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-13 00:56:19.937870 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.937884 | orchestrator | 2026-04-13 00:56:19.937898 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-13 00:56:19.937912 | orchestrator | Monday 13 April 2026 00:53:18 +0000 (0:00:00.972) 0:03:53.480 ********** 2026-04-13 00:56:19.937927 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.937937 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.937946 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.937954 | orchestrator | 2026-04-13 00:56:19.937962 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-13 00:56:19.937976 | orchestrator | Monday 13 April 2026 00:53:19 +0000 (0:00:00.422) 0:03:53.903 ********** 2026-04-13 00:56:19.937997 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.938042 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.938061 | orchestrator | skipping: [testbed-node-2] 
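The skip pattern visible above is driven by the `enabled` flag on each haproxy listener: the memcached container itself is enabled (its config copy task reports `changed`), but its single listener has `enabled: False`, so every haproxy and firewall task for it reports `skipping`. A minimal sketch of that filter logic, using service definitions copied from the log (this is an illustration of the observed behavior, not the kolla-ansible role source):

```python
# Sketch: reproduce the per-listener skip decisions seen in the log above.
# A listener is only rendered into the HAProxy config when the listener
# entry itself has enabled=True; memcached's one listener is disabled here,
# while both neutron-server listeners are enabled.

services = {
    "memcached": {
        "enabled": True,  # container runs, but its LB listener does not
        "haproxy": {
            "memcached": {"enabled": False, "mode": "tcp", "port": "11211",
                          "active_passive": True},
        },
    },
    "neutron-server": {
        "enabled": True,
        "haproxy": {
            "neutron_server": {"enabled": True, "mode": "http",
                               "port": "9696"},
            "neutron_server_external": {"enabled": True, "mode": "http",
                                        "port": "9696", "external": True},
        },
    },
}


def listeners_to_render(services):
    """Return the names of haproxy listeners that would be configured."""
    return [name
            for svc in services.values() if svc.get("enabled")
            for name, listener in svc.get("haproxy", {}).items()
            if listener.get("enabled")]


print(listeners_to_render(services))
```

Applied to the dicts above, only the two neutron listeners survive the filter, matching the `changed` vs. `skipping` results in the task output.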
2026-04-13 00:56:19.938075 | orchestrator | 2026-04-13 00:56:19.938092 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-13 00:56:19.938106 | orchestrator | Monday 13 April 2026 00:53:20 +0000 (0:00:01.281) 0:03:55.184 ********** 2026-04-13 00:56:19.938121 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.938130 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.938138 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.938230 | orchestrator | 2026-04-13 00:56:19.938244 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-13 00:56:19.938252 | orchestrator | Monday 13 April 2026 00:53:20 +0000 (0:00:00.312) 0:03:55.497 ********** 2026-04-13 00:56:19.938260 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.938268 | orchestrator | 2026-04-13 00:56:19.938277 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-13 00:56:19.938285 | orchestrator | Monday 13 April 2026 00:53:22 +0000 (0:00:01.491) 0:03:56.989 ********** 2026-04-13 00:56:19.938293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-13 00:56:19.938303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-13 00:56:19.938464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938482 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.938498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.938513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:56:19.938604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-13 00:56:19.938621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-13 00:56:19.938647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}}})  2026-04-13 00:56:19.938654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-13 00:56:19.938757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-13 00:56:19.938764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:56:19.938772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-13 00:56:19.938842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.938850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.938890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:56:19.938972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 
5672"], 'timeout': '30'}}})  2026-04-13 00:56:19.938979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.938997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939009 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-13 00:56:19.939108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939121 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:56:19.939173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:56:19.939184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-13 00:56:19.939270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:56:19.939280 | orchestrator | 2026-04-13 00:56:19.939287 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-13 00:56:19.939294 | orchestrator | Monday 13 April 2026 00:53:26 +0000 (0:00:04.516) 0:04:01.505 ********** 2026-04-13 00:56:19.939301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-13 00:56:19.939309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-13 00:56:19.939421 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:56:19.939470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-13 00:56:19.939531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-13 00:56:19.939570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939648 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-13 00:56:19.939722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-13 00:56:19.939732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:56:19.939763 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.939770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-13 00:56:19.939778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939854 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:56:19.939892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:56:19.939961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.939984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-13 00:56:19.939999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.940027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-13 00:56:19.940036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-13 00:56:19.940057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.940070 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:56:19.940078 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.940085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-13 00:56:19.940113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:56:19.940122 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.940129 | orchestrator | 2026-04-13 00:56:19.940136 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-13 00:56:19.940143 | orchestrator | Monday 13 April 2026 00:53:28 +0000 (0:00:01.999) 0:04:03.504 ********** 2026-04-13 00:56:19.940150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-13 00:56:19.940158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-13 00:56:19.940170 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.940177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-13 00:56:19.940184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-13 00:56:19.940191 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.940198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}})  2026-04-13 00:56:19.940205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-13 00:56:19.940211 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.940218 | orchestrator | 2026-04-13 00:56:19.940225 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-13 00:56:19.940232 | orchestrator | Monday 13 April 2026 00:53:30 +0000 (0:00:01.553) 0:04:05.058 ********** 2026-04-13 00:56:19.940239 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.940246 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.940253 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.940260 | orchestrator | 2026-04-13 00:56:19.940276 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-13 00:56:19.940283 | orchestrator | Monday 13 April 2026 00:53:31 +0000 (0:00:01.450) 0:04:06.508 ********** 2026-04-13 00:56:19.940290 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.940297 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.940303 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.940310 | orchestrator | 2026-04-13 00:56:19.940317 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-13 00:56:19.940324 | orchestrator | Monday 13 April 2026 00:53:33 +0000 (0:00:02.193) 0:04:08.702 ********** 2026-04-13 00:56:19.940334 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.940341 | orchestrator | 2026-04-13 00:56:19.940348 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-13 00:56:19.940355 | orchestrator | Monday 13 April 2026 00:53:35 +0000 (0:00:01.508) 
0:04:10.211 ********** 2026-04-13 00:56:19.940362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 00:56:19.940391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 00:56:19.940405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 00:56:19.940433 | orchestrator | 2026-04-13 00:56:19.940445 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-13 00:56:19.940457 | orchestrator | Monday 13 April 2026 00:53:38 +0000 (0:00:03.098) 0:04:13.309 ********** 2026-04-13 00:56:19.940469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.940480 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.940490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.940498 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.940528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.940542 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.940549 | orchestrator | 2026-04-13 00:56:19.940556 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-13 00:56:19.940563 | orchestrator | Monday 13 April 2026 00:53:38 +0000 (0:00:00.487) 0:04:13.797 ********** 2026-04-13 00:56:19.940570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-13 00:56:19.940579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-13 00:56:19.940587 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.940596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-13 00:56:19.940604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-13 00:56:19.940612 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.940620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-13 00:56:19.940628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-13 00:56:19.940637 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.940644 | orchestrator | 2026-04-13 00:56:19.940652 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-13 00:56:19.940660 | orchestrator | Monday 13 April 2026 00:53:40 +0000 (0:00:01.114) 0:04:14.912 ********** 2026-04-13 00:56:19.940668 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.940676 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.940683 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.940692 | orchestrator | 2026-04-13 00:56:19.940699 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-13 00:56:19.940707 | orchestrator | Monday 13 April 2026 00:53:41 +0000 (0:00:01.317) 0:04:16.229 ********** 2026-04-13 00:56:19.940715 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.940723 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.940730 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.940738 | orchestrator | 2026-04-13 00:56:19.940746 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-13 00:56:19.940757 | orchestrator | Monday 13 April 2026 00:53:43 +0000 (0:00:02.117) 0:04:18.346 ********** 2026-04-13 00:56:19.940765 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.940773 | orchestrator | 2026-04-13 00:56:19.940781 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-13 00:56:19.940789 | orchestrator | Monday 13 April 2026 00:53:44 +0000 (0:00:01.478) 0:04:19.825 ********** 2026-04-13 00:56:19.940799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 00:56:19.940832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.940842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.940851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 00:56:19.940863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.940876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.940904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 00:56:19.940912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.940920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.940927 | orchestrator | 2026-04-13 00:56:19.940934 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-13 00:56:19.940941 | orchestrator | Monday 13 April 2026 00:53:49 +0000 (0:00:04.308) 0:04:24.133 ********** 2026-04-13 00:56:19.940952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.940963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.940989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.940998 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.941005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.941013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.941023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.941035 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.941042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.941069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.941077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.941085 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.941092 | orchestrator | 2026-04-13 00:56:19.941099 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-13 00:56:19.941106 | orchestrator | Monday 13 April 2026 00:53:49 +0000 (0:00:00.697) 0:04:24.830 ********** 2026-04-13 00:56:19.941113 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-13 00:56:19.941121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-13 00:56:19.941128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-13 00:56:19.941139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-13 00:56:19.941147 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.941159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-13 00:56:19.941166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-13 00:56:19.941173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-13 00:56:19.941181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-13 00:56:19.941188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-13 00:56:19.941195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-13 00:56:19.941202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-13 00:56:19.941209 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.941216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-13 00:56:19.941244 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.941257 | orchestrator | 2026-04-13 00:56:19.941269 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-13 00:56:19.941282 | orchestrator | Monday 13 April 2026 00:53:50 +0000 (0:00:00.952) 0:04:25.783 ********** 2026-04-13 00:56:19.941293 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.941303 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.941314 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.941325 | orchestrator | 2026-04-13 00:56:19.941336 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-13 00:56:19.941348 | orchestrator | Monday 13 April 2026 00:53:52 +0000 (0:00:01.872) 0:04:27.656 ********** 
2026-04-13 00:56:19.941359 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.941370 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.941382 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.941394 | orchestrator | 2026-04-13 00:56:19.941403 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-13 00:56:19.941451 | orchestrator | Monday 13 April 2026 00:53:55 +0000 (0:00:02.206) 0:04:29.862 ********** 2026-04-13 00:56:19.941459 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.941466 | orchestrator | 2026-04-13 00:56:19.941473 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-13 00:56:19.941480 | orchestrator | Monday 13 April 2026 00:53:56 +0000 (0:00:01.380) 0:04:31.243 ********** 2026-04-13 00:56:19.941488 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-13 00:56:19.941502 | orchestrator | 2026-04-13 00:56:19.941509 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-13 00:56:19.941516 | orchestrator | Monday 13 April 2026 00:53:57 +0000 (0:00:01.528) 0:04:32.772 ********** 2026-04-13 00:56:19.941524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-13 00:56:19.941532 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-13 00:56:19.941543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-13 00:56:19.941551 | orchestrator | 2026-04-13 00:56:19.941558 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-13 00:56:19.941566 | orchestrator | Monday 13 April 2026 00:54:02 +0000 (0:00:04.155) 0:04:36.927 ********** 2026-04-13 00:56:19.941573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:56:19.941580 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.941588 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:56:19.941623 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.941631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:56:19.941638 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.941644 | orchestrator | 2026-04-13 00:56:19.941651 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-13 00:56:19.941662 | orchestrator | Monday 13 April 2026 00:54:03 +0000 (0:00:01.442) 0:04:38.370 ********** 2026-04-13 00:56:19.941669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-13 00:56:19.941675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}})  2026-04-13 00:56:19.941682 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.941689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-13 00:56:19.941696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-13 00:56:19.941702 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.941709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-13 00:56:19.941716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-13 00:56:19.941722 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.941729 | orchestrator | 2026-04-13 00:56:19.941735 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-13 00:56:19.941745 | orchestrator | Monday 13 April 2026 00:54:05 +0000 (0:00:02.127) 0:04:40.498 ********** 2026-04-13 00:56:19.941751 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.941758 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.941764 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.941771 | orchestrator | 2026-04-13 00:56:19.941777 | orchestrator | TASK [proxysql-config : Copying over nova-cell 
ProxySQL rules config] ********** 2026-04-13 00:56:19.941784 | orchestrator | Monday 13 April 2026 00:54:08 +0000 (0:00:02.493) 0:04:42.991 ********** 2026-04-13 00:56:19.941790 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.941797 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.941813 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.941820 | orchestrator | 2026-04-13 00:56:19.941826 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-13 00:56:19.941833 | orchestrator | Monday 13 April 2026 00:54:11 +0000 (0:00:03.275) 0:04:46.266 ********** 2026-04-13 00:56:19.941839 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-13 00:56:19.941846 | orchestrator | 2026-04-13 00:56:19.941852 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-13 00:56:19.941859 | orchestrator | Monday 13 April 2026 00:54:12 +0000 (0:00:01.408) 0:04:47.675 ********** 2026-04-13 00:56:19.941866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:56:19.941877 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.941904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:56:19.941912 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.941919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:56:19.941925 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.941932 | orchestrator | 2026-04-13 00:56:19.941938 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-13 00:56:19.941945 | orchestrator | Monday 13 April 2026 00:54:14 +0000 (0:00:01.538) 0:04:49.214 ********** 2026-04-13 00:56:19.941951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:56:19.941958 | orchestrator | skipping: 
[testbed-node-0] 2026-04-13 00:56:19.941965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:56:19.941971 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.941981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:56:19.941988 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.941995 | orchestrator | 2026-04-13 00:56:19.942001 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-13 00:56:19.942008 | orchestrator | Monday 13 April 2026 00:54:16 +0000 (0:00:01.758) 0:04:50.973 ********** 2026-04-13 00:56:19.942014 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.942046 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.942052 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.942059 | orchestrator | 2026-04-13 00:56:19.942065 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-13 00:56:19.942076 | 
orchestrator | Monday 13 April 2026 00:54:17 +0000 (0:00:01.316) 0:04:52.290 ********** 2026-04-13 00:56:19.942082 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:56:19.942089 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:56:19.942096 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:56:19.942102 | orchestrator | 2026-04-13 00:56:19.942109 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-13 00:56:19.942115 | orchestrator | Monday 13 April 2026 00:54:20 +0000 (0:00:02.737) 0:04:55.027 ********** 2026-04-13 00:56:19.942122 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:56:19.942128 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:56:19.942135 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:56:19.942141 | orchestrator | 2026-04-13 00:56:19.942147 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-13 00:56:19.942154 | orchestrator | Monday 13 April 2026 00:54:23 +0000 (0:00:03.405) 0:04:58.432 ********** 2026-04-13 00:56:19.942161 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-13 00:56:19.942167 | orchestrator | 2026-04-13 00:56:19.942194 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-13 00:56:19.942201 | orchestrator | Monday 13 April 2026 00:54:24 +0000 (0:00:00.858) 0:04:59.291 ********** 2026-04-13 00:56:19.942208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-13 00:56:19.942215 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.942221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-13 00:56:19.942228 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.942234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-13 00:56:19.942241 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.942247 | orchestrator | 2026-04-13 00:56:19.942254 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-13 00:56:19.942260 | orchestrator | Monday 13 April 2026 00:54:25 +0000 (0:00:01.466) 0:05:00.757 ********** 2026-04-13 00:56:19.942270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-13 00:56:19.942281 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.942288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-13 00:56:19.942294 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.942301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-13 00:56:19.942307 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.942314 | orchestrator | 2026-04-13 00:56:19.942320 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-13 00:56:19.942326 | orchestrator | Monday 13 April 2026 00:54:27 +0000 (0:00:01.343) 0:05:02.101 ********** 2026-04-13 
00:56:19.942333 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.942339 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.942346 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.942352 | orchestrator | 2026-04-13 00:56:19.942359 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-13 00:56:19.942382 | orchestrator | Monday 13 April 2026 00:54:28 +0000 (0:00:01.666) 0:05:03.768 ********** 2026-04-13 00:56:19.942389 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:56:19.942396 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:56:19.942402 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:56:19.942423 | orchestrator | 2026-04-13 00:56:19.942431 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-13 00:56:19.942437 | orchestrator | Monday 13 April 2026 00:54:31 +0000 (0:00:02.805) 0:05:06.573 ********** 2026-04-13 00:56:19.942443 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:56:19.942450 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:56:19.942456 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:56:19.942463 | orchestrator | 2026-04-13 00:56:19.942469 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-13 00:56:19.942475 | orchestrator | Monday 13 April 2026 00:54:35 +0000 (0:00:03.509) 0:05:10.082 ********** 2026-04-13 00:56:19.942482 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.942488 | orchestrator | 2026-04-13 00:56:19.942495 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-13 00:56:19.942501 | orchestrator | Monday 13 April 2026 00:54:36 +0000 (0:00:01.295) 0:05:11.377 ********** 2026-04-13 00:56:19.942508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 00:56:19.942520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-13 00:56:19.942530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-04-13 00:56:19.942537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-13 00:56:19.942544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.942570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 00:56:19.942577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-13 00:56:19.942588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-13 00:56:19.942598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-13 00:56:19.942605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.942630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 00:56:19.942638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-13 00:56:19.942645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-13 00:56:19.942655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-13 00:56:19.942662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.942668 | orchestrator | 2026-04-13 00:56:19.942680 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-13 00:56:19.942687 | orchestrator | Monday 13 April 2026 00:54:40 +0000 (0:00:04.134) 0:05:15.512 ********** 2026-04-13 00:56:19.942694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.942701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-13 00:56:19.942726 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-13 00:56:19.942733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-13 00:56:19.942744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.942751 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.942761 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.942768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-13 00:56:19.942774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-13 00:56:19.942799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-13 00:56:19.942807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.942817 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.942824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.942831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-13 00:56:19.942838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-13 00:56:19.942845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-13 00:56:19.942869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:56:19.942877 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.942884 | orchestrator | 2026-04-13 00:56:19.942890 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-13 00:56:19.942897 | orchestrator | Monday 13 April 2026 00:54:42 +0000 (0:00:01.417) 0:05:16.930 ********** 2026-04-13 00:56:19.942903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-13 00:56:19.942914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-13 00:56:19.942921 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.942928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-13 00:56:19.942934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-13 00:56:19.942941 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.942947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-13 00:56:19.942958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-13 00:56:19.942969 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.942980 | orchestrator | 2026-04-13 00:56:19.942992 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-13 00:56:19.943003 | orchestrator | Monday 13 April 2026 00:54:43 +0000 (0:00:01.120) 0:05:18.050 ********** 2026-04-13 00:56:19.943070 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.943085 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.943091 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.943098 | orchestrator | 2026-04-13 00:56:19.943104 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-13 00:56:19.943111 | orchestrator | Monday 13 April 2026 00:54:44 +0000 (0:00:01.422) 0:05:19.472 ********** 2026-04-13 00:56:19.943117 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.943123 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.943129 | orchestrator | changed: [testbed-node-2] 
2026-04-13 00:56:19.943136 | orchestrator | 2026-04-13 00:56:19.943142 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-13 00:56:19.943149 | orchestrator | Monday 13 April 2026 00:54:46 +0000 (0:00:02.350) 0:05:21.823 ********** 2026-04-13 00:56:19.943158 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.943164 | orchestrator | 2026-04-13 00:56:19.943171 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-13 00:56:19.943177 | orchestrator | Monday 13 April 2026 00:54:48 +0000 (0:00:01.722) 0:05:23.545 ********** 2026-04-13 00:56:19.943184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-13 00:56:19.943217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-13 00:56:19.943231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-13 00:56:19.943239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-13 00:56:19.943249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-13 00:56:19.943274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-13 00:56:19.943287 | orchestrator | 2026-04-13 00:56:19.943293 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-13 00:56:19.943300 | orchestrator | Monday 13 April 2026 00:54:54 +0000 (0:00:05.703) 0:05:29.249 ********** 2026-04-13 00:56:19.943306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-13 00:56:19.943313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-13 00:56:19.943321 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.943338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-13 00:56:19.943363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-13 00:56:19.943375 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.943382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-13 00:56:19.943389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-13 00:56:19.943396 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.943403 | orchestrator | 2026-04-13 00:56:19.943426 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-13 00:56:19.943433 | orchestrator | Monday 13 April 2026 00:54:55 +0000 (0:00:01.194) 0:05:30.443 ********** 2026-04-13 00:56:19.943439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-13 00:56:19.943449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-13 00:56:19.943456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-13 00:56:19.943463 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.943473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-13 00:56:19.943480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-13 00:56:19.943487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-13 00:56:19.943493 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.943499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-13 00:56:19.943529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-13 00:56:19.943537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-13 00:56:19.943546 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.943553 | orchestrator | 2026-04-13 00:56:19.943560 | orchestrator | TASK 
[proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-13 00:56:19.943566 | orchestrator | Monday 13 April 2026 00:54:57 +0000 (0:00:01.432) 0:05:31.875 ********** 2026-04-13 00:56:19.943573 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.943579 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.943585 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.943592 | orchestrator | 2026-04-13 00:56:19.943598 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-13 00:56:19.943605 | orchestrator | Monday 13 April 2026 00:54:57 +0000 (0:00:00.487) 0:05:32.362 ********** 2026-04-13 00:56:19.943611 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.943618 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.943624 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.943630 | orchestrator | 2026-04-13 00:56:19.943637 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-13 00:56:19.943643 | orchestrator | Monday 13 April 2026 00:54:58 +0000 (0:00:01.427) 0:05:33.790 ********** 2026-04-13 00:56:19.943650 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:19.943656 | orchestrator | 2026-04-13 00:56:19.943663 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-13 00:56:19.943669 | orchestrator | Monday 13 April 2026 00:55:00 +0000 (0:00:01.734) 0:05:35.525 ********** 2026-04-13 00:56:19.943676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-13 00:56:19.943686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 00:56:19.943697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.943704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-13 00:56:19.943729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 00:56:19.943737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-13 00:56:19.943744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-13 00:56:19.943751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 00:56:19.943764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 00:56:19.943771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.943778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.943785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.943811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.943819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2026-04-13 00:56:19.943826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 00:56:19.943836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-13 00:56:19.943850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-13 00:56:19.943861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-13 00:56:19.943869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-13 00:56:19.943876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-13 00:56:19.943889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.943897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.943903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.943915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': 
'9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-13 00:56:19.943922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.943929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-13 00:56:19.943939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.943946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-13 00:56:19.943955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.943962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-13 00:56:19.943969 | orchestrator | 2026-04-13 00:56:19.943975 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-13 00:56:19.943981 | orchestrator | Monday 13 April 2026 00:55:05 +0000 (0:00:04.552) 0:05:40.077 ********** 2026-04-13 00:56:19.943991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-13 00:56:19.943998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 00:56:19.944005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.944015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.944022 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 00:56:19.944031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-13 00:56:19.944042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-13 00:56:19.944049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-13 00:56:19.944056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 00:56:19.944069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.944076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.944086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:56:19.944093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})
2026-04-13 00:56:19.944099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 00:56:19.944106 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.944116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 00:56:19.944123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled':
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-13 00:56:19.944134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-13 00:56:19.944143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:56:19.944151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes':
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-13 00:56:19.944157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:56:19.944167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 00:56:19.944174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes':
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 00:56:19.944186 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.944193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:56:19.944199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:56:19.944206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 00:56:19.944216 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-13 00:56:19.944226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-13 00:56:19.944233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name':
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:56:19.944244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:56:19.944251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 00:56:19.944257 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.944264 | orchestrator |
2026-04-13 00:56:19.944270 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-04-13 00:56:19.944277 | orchestrator | Monday 13 April 2026 00:55:06 +0000 (0:00:00.873) 0:05:40.951 **********
2026-04-13 00:56:19.944283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive':
True}})
2026-04-13 00:56:19.944290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-04-13 00:56:19.944299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-13 00:56:19.944307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-04-13 00:56:19.944313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-04-13 00:56:19.944320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-13 00:56:19.944327 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.944334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-13 00:56:19.944341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-13 00:56:19.944354 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.944364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-04-13 00:56:19.944370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-04-13 00:56:19.944377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-13 00:56:19.944384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-13 00:56:19.944390 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.944397 | orchestrator |
2026-04-13 00:56:19.944403 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-04-13 00:56:19.944424 | orchestrator | Monday 13 April 2026 00:55:07 +0000 (0:00:01.400) 0:05:42.352 **********
2026-04-13 00:56:19.944435 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.944441 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.944448 | orchestrator |
skipping: [testbed-node-2]
2026-04-13 00:56:19.944454 | orchestrator |
2026-04-13 00:56:19.944460 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-04-13 00:56:19.944467 | orchestrator | Monday 13 April 2026 00:55:08 +0000 (0:00:00.507) 0:05:42.859 **********
2026-04-13 00:56:19.944473 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.944480 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.944486 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.944492 | orchestrator |
2026-04-13 00:56:19.944499 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-04-13 00:56:19.944505 | orchestrator | Monday 13 April 2026 00:55:09 +0000 (0:00:01.668) 0:05:44.356 **********
2026-04-13 00:56:19.944511 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:56:19.944518 | orchestrator |
2026-04-13 00:56:19.944524 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-04-13 00:56:19.944530 | orchestrator | Monday 13 April 2026 00:55:11 +0000 (0:00:01.668) 0:05:46.025 **********
2026-04-13 00:56:19.944540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:56:19.944547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:56:19.944563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:56:19.944570 | orchestrator |
2026-04-13 00:56:19.944577 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-04-13 00:56:19.944583 | orchestrator | Monday 13 April 2026 00:55:14 +0000 (0:00:03.076) 0:05:49.101 **********
2026-04-13 00:56:19.944590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:56:19.944600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS',
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:56:19.944607 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.944616 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.944623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:56:19.944630 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.944637 | orchestrator |
2026-04-13 00:56:19.944646 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-04-13 00:56:19.944653 | orchestrator | Monday 13
April 2026 00:55:14 +0000 (0:00:00.464) 0:05:49.566 **********
2026-04-13 00:56:19.944659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-13 00:56:19.944666 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.944672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-13 00:56:19.944679 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.944685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-13 00:56:19.944692 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.944699 | orchestrator |
2026-04-13 00:56:19.944705 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-04-13 00:56:19.944711 | orchestrator | Monday 13 April 2026 00:55:15 +0000 (0:00:00.668) 0:05:50.234 **********
2026-04-13 00:56:19.944718 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.944724 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.944731 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.944737 | orchestrator |
2026-04-13 00:56:19.944744 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-04-13 00:56:19.944750 | orchestrator | Monday 13 April 2026 00:55:16 +0000 (0:00:00.891) 0:05:51.126 **********
2026-04-13 00:56:19.944756 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:19.944763 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:19.944769 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:19.944776 | orchestrator |
2026-04-13 00:56:19.944782 | orchestrator | TASK [include_role : skyline]
**************************************************
2026-04-13 00:56:19.944789 | orchestrator | Monday 13 April 2026 00:55:17 +0000 (0:00:01.524) 0:05:52.570 **********
2026-04-13 00:56:19.944795 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:56:19.944801 | orchestrator |
2026-04-13 00:56:19.944808 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-04-13 00:56:19.944814 | orchestrator | Monday 13 April 2026 00:55:19 +0000 (0:00:01.524) 0:05:54.095 **********
2026-04-13 00:56:19.944823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-04-13 00:56:19.944834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-04-13 00:56:19.944845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-04-13 00:56:19.944852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-04-13 00:56:19.944860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-04-13 00:56:19.944873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy':
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-04-13 00:56:19.944880 | orchestrator |
2026-04-13 00:56:19.944886 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-04-13 00:56:19.944893 | orchestrator | Monday 13 April 2026 00:55:26 +0000 (0:00:07.161) 0:06:01.256 **********
2026-04-13 00:56:19.944903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-04-13 00:56:19.944910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.944917 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.944923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.944936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.944944 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.944950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.944960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-13 00:56:19.944967 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.944974 | orchestrator | 2026-04-13 00:56:19.944980 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-13 00:56:19.944987 | orchestrator | Monday 13 April 2026 00:55:27 +0000 (0:00:01.129) 0:06:02.386 ********** 2026-04-13 00:56:19.944993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-13 00:56:19.945000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-13 00:56:19.945007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-13 00:56:19.945017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-13 00:56:19.945024 | orchestrator | skipping: 
[testbed-node-0] 2026-04-13 00:56:19.945030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-13 00:56:19.945037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-13 00:56:19.945046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-13 00:56:19.945053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-13 00:56:19.945059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-13 00:56:19.945066 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.945073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-13 00:56:19.945079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-13 00:56:19.945086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-13 00:56:19.945093 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.945099 | orchestrator | 2026-04-13 00:56:19.945106 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-13 00:56:19.945112 | orchestrator | Monday 13 April 2026 00:55:28 +0000 (0:00:01.047) 0:06:03.434 ********** 2026-04-13 00:56:19.945118 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.945125 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.945131 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.945138 | orchestrator | 2026-04-13 00:56:19.945147 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-13 00:56:19.945154 | orchestrator | Monday 13 April 2026 00:55:29 +0000 (0:00:01.323) 0:06:04.757 ********** 2026-04-13 00:56:19.945160 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.945167 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.945173 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.945180 | orchestrator | 2026-04-13 00:56:19.945186 | orchestrator | TASK [include_role : swift] **************************************************** 2026-04-13 00:56:19.945193 | orchestrator | Monday 13 April 2026 00:55:32 +0000 (0:00:02.476) 0:06:07.234 ********** 2026-04-13 00:56:19.945199 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.945206 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.945212 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.945219 | orchestrator | 2026-04-13 00:56:19.945229 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-13 00:56:19.945235 | orchestrator | Monday 13 April 2026 00:55:33 +0000 (0:00:00.853) 
0:06:08.087 ********** 2026-04-13 00:56:19.945242 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.945248 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.945255 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.945261 | orchestrator | 2026-04-13 00:56:19.945268 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-13 00:56:19.945274 | orchestrator | Monday 13 April 2026 00:55:33 +0000 (0:00:00.347) 0:06:08.434 ********** 2026-04-13 00:56:19.945281 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.945287 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.945293 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.945299 | orchestrator | 2026-04-13 00:56:19.945306 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-13 00:56:19.945312 | orchestrator | Monday 13 April 2026 00:55:33 +0000 (0:00:00.330) 0:06:08.764 ********** 2026-04-13 00:56:19.945319 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.945325 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.945332 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.945338 | orchestrator | 2026-04-13 00:56:19.945345 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-13 00:56:19.945351 | orchestrator | Monday 13 April 2026 00:55:34 +0000 (0:00:00.340) 0:06:09.105 ********** 2026-04-13 00:56:19.945358 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.945364 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.945371 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.945377 | orchestrator | 2026-04-13 00:56:19.945384 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-13 00:56:19.945390 | orchestrator | Monday 13 April 2026 00:55:35 +0000 (0:00:00.779) 
0:06:09.885 ********** 2026-04-13 00:56:19.945397 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.945403 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.945444 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.945453 | orchestrator | 2026-04-13 00:56:19.945460 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-13 00:56:19.945466 | orchestrator | Monday 13 April 2026 00:55:35 +0000 (0:00:00.622) 0:06:10.507 ********** 2026-04-13 00:56:19.945473 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:56:19.945479 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:56:19.945486 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:56:19.945492 | orchestrator | 2026-04-13 00:56:19.945498 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-13 00:56:19.945505 | orchestrator | Monday 13 April 2026 00:55:36 +0000 (0:00:00.712) 0:06:11.220 ********** 2026-04-13 00:56:19.945511 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:56:19.945518 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:56:19.945524 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:56:19.945530 | orchestrator | 2026-04-13 00:56:19.945541 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-13 00:56:19.945547 | orchestrator | Monday 13 April 2026 00:55:37 +0000 (0:00:00.742) 0:06:11.962 ********** 2026-04-13 00:56:19.945554 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:56:19.945560 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:56:19.945566 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:56:19.945573 | orchestrator | 2026-04-13 00:56:19.945579 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-13 00:56:19.945586 | orchestrator | Monday 13 April 2026 00:55:38 +0000 (0:00:00.924) 0:06:12.887 ********** 2026-04-13 00:56:19.945592 | 
orchestrator | ok: [testbed-node-0] 2026-04-13 00:56:19.945598 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:56:19.945605 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:56:19.945611 | orchestrator | 2026-04-13 00:56:19.945617 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-13 00:56:19.945631 | orchestrator | Monday 13 April 2026 00:55:39 +0000 (0:00:00.978) 0:06:13.865 ********** 2026-04-13 00:56:19.945638 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:56:19.945644 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:56:19.945649 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:56:19.945655 | orchestrator | 2026-04-13 00:56:19.945660 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-13 00:56:19.945666 | orchestrator | Monday 13 April 2026 00:55:39 +0000 (0:00:00.927) 0:06:14.793 ********** 2026-04-13 00:56:19.945672 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.945677 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.945683 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.945688 | orchestrator | 2026-04-13 00:56:19.945694 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-13 00:56:19.945700 | orchestrator | Monday 13 April 2026 00:55:45 +0000 (0:00:05.426) 0:06:20.220 ********** 2026-04-13 00:56:19.945705 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:56:19.945711 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:56:19.945716 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:56:19.945722 | orchestrator | 2026-04-13 00:56:19.945728 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-13 00:56:19.945733 | orchestrator | Monday 13 April 2026 00:55:48 +0000 (0:00:03.253) 0:06:23.474 ********** 2026-04-13 00:56:19.945739 | orchestrator | changed: [testbed-node-0] 2026-04-13 
00:56:19.945744 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.945750 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.945756 | orchestrator | 2026-04-13 00:56:19.945761 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-13 00:56:19.945770 | orchestrator | Monday 13 April 2026 00:56:04 +0000 (0:00:15.648) 0:06:39.123 ********** 2026-04-13 00:56:19.945776 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:56:19.945781 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:56:19.945787 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:56:19.945793 | orchestrator | 2026-04-13 00:56:19.945798 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-13 00:56:19.945804 | orchestrator | Monday 13 April 2026 00:56:05 +0000 (0:00:00.809) 0:06:39.932 ********** 2026-04-13 00:56:19.945810 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:19.945815 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:19.945821 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:19.945827 | orchestrator | 2026-04-13 00:56:19.945832 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-13 00:56:19.945838 | orchestrator | Monday 13 April 2026 00:56:13 +0000 (0:00:08.337) 0:06:48.270 ********** 2026-04-13 00:56:19.945844 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.945849 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.945855 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.945860 | orchestrator | 2026-04-13 00:56:19.945866 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-13 00:56:19.945872 | orchestrator | Monday 13 April 2026 00:56:14 +0000 (0:00:00.722) 0:06:48.992 ********** 2026-04-13 00:56:19.945877 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.945883 | 
orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.945889 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.945894 | orchestrator | 2026-04-13 00:56:19.945900 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-13 00:56:19.945905 | orchestrator | Monday 13 April 2026 00:56:14 +0000 (0:00:00.371) 0:06:49.364 ********** 2026-04-13 00:56:19.945911 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.945917 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.945922 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.945928 | orchestrator | 2026-04-13 00:56:19.945933 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-13 00:56:19.945939 | orchestrator | Monday 13 April 2026 00:56:14 +0000 (0:00:00.375) 0:06:49.739 ********** 2026-04-13 00:56:19.945948 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.945954 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.945960 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.945965 | orchestrator | 2026-04-13 00:56:19.945971 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-13 00:56:19.945977 | orchestrator | Monday 13 April 2026 00:56:15 +0000 (0:00:00.324) 0:06:50.064 ********** 2026-04-13 00:56:19.945983 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.945988 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.945994 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.946000 | orchestrator | 2026-04-13 00:56:19.946006 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-13 00:56:19.946011 | orchestrator | Monday 13 April 2026 00:56:15 +0000 (0:00:00.732) 0:06:50.796 ********** 2026-04-13 00:56:19.946036 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:19.946042 | 
orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:19.946047 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:19.946053 | orchestrator | 2026-04-13 00:56:19.946058 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-13 00:56:19.946064 | orchestrator | Monday 13 April 2026 00:56:16 +0000 (0:00:00.380) 0:06:51.177 ********** 2026-04-13 00:56:19.946070 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:56:19.946076 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:56:19.946082 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:56:19.946088 | orchestrator | 2026-04-13 00:56:19.946093 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-13 00:56:19.946102 | orchestrator | Monday 13 April 2026 00:56:17 +0000 (0:00:00.936) 0:06:52.113 ********** 2026-04-13 00:56:19.946108 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:56:19.946113 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:56:19.946119 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:56:19.946125 | orchestrator | 2026-04-13 00:56:19.946130 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:56:19.946136 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-13 00:56:19.946142 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-13 00:56:19.946148 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-13 00:56:19.946154 | orchestrator | 2026-04-13 00:56:19.946159 | orchestrator | 2026-04-13 00:56:19.946165 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:56:19.946171 | orchestrator | Monday 13 April 2026 00:56:18 +0000 (0:00:00.830) 0:06:52.944 ********** 2026-04-13 
00:56:19.946177 | orchestrator | =============================================================================== 2026-04-13 00:56:19.946182 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.65s 2026-04-13 00:56:19.946188 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.34s 2026-04-13 00:56:19.946193 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.16s 2026-04-13 00:56:19.946199 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.12s 2026-04-13 00:56:19.946205 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.83s 2026-04-13 00:56:19.946211 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.70s 2026-04-13 00:56:19.946216 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.43s 2026-04-13 00:56:19.946222 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 5.22s 2026-04-13 00:56:19.946230 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.93s 2026-04-13 00:56:19.946236 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.55s 2026-04-13 00:56:19.946246 | orchestrator | loadbalancer : Check loadbalancer containers ---------------------------- 4.52s 2026-04-13 00:56:19.946252 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.52s 2026-04-13 00:56:19.946257 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.45s 2026-04-13 00:56:19.946263 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.31s 2026-04-13 00:56:19.946269 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.31s 2026-04-13 00:56:19.946274 
| orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.27s 2026-04-13 00:56:19.946280 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.23s 2026-04-13 00:56:19.946285 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.20s 2026-04-13 00:56:19.946291 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.16s 2026-04-13 00:56:19.946297 | orchestrator | proxysql-config : Copying over aodh ProxySQL rules config --------------- 4.15s 2026-04-13 00:56:19.946302 | orchestrator | 2026-04-13 00:56:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:22.960106 | orchestrator | 2026-04-13 00:56:22 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:56:22.961321 | orchestrator | 2026-04-13 00:56:22 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:56:22.962748 | orchestrator | 2026-04-13 00:56:22 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:56:22.962790 | orchestrator | 2026-04-13 00:56:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:26.006130 | orchestrator | 2026-04-13 00:56:26 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:56:26.009575 | orchestrator | 2026-04-13 00:56:26 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:56:26.010743 | orchestrator | 2026-04-13 00:56:26 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:56:26.011351 | orchestrator | 2026-04-13 00:56:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:29.045706 | orchestrator | 2026-04-13 00:56:29 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:56:29.045893 | orchestrator | 2026-04-13 00:56:29 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 
is in state STARTED 2026-04-13 00:56:29.047022 | orchestrator | 2026-04-13 00:56:29 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:56:29.047055 | orchestrator | 2026-04-13 00:56:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:32.080281 | orchestrator | 2026-04-13 00:56:32 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:56:32.083367 | orchestrator | 2026-04-13 00:56:32 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:56:32.085796 | orchestrator | 2026-04-13 00:56:32 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:56:32.085852 | orchestrator | 2026-04-13 00:56:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:35.130942 | orchestrator | 2026-04-13 00:56:35 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:56:35.131275 | orchestrator | 2026-04-13 00:56:35 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:56:35.132459 | orchestrator | 2026-04-13 00:56:35 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:56:35.132505 | orchestrator | 2026-04-13 00:56:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:38.191901 | orchestrator | 2026-04-13 00:56:38 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:56:38.193321 | orchestrator | 2026-04-13 00:56:38 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:56:38.196177 | orchestrator | 2026-04-13 00:56:38 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:56:38.196257 | orchestrator | 2026-04-13 00:56:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:41.253532 | orchestrator | 2026-04-13 00:56:41 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:56:41.254318 | 
orchestrator | 2026-04-13 00:56:41 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:56:41.255026 | orchestrator | 2026-04-13 00:56:41 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:56:41.255076 | orchestrator | 2026-04-13 00:56:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:44.321816 | orchestrator | 2026-04-13 00:56:44 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:56:44.322768 | orchestrator | 2026-04-13 00:56:44 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:56:44.326693 | orchestrator | 2026-04-13 00:56:44 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:56:44.326762 | orchestrator | 2026-04-13 00:56:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:47.371018 | orchestrator | 2026-04-13 00:56:47 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:56:47.371738 | orchestrator | 2026-04-13 00:56:47 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:56:47.372740 | orchestrator | 2026-04-13 00:56:47 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:56:47.372963 | orchestrator | 2026-04-13 00:56:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:50.420059 | orchestrator | 2026-04-13 00:56:50 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:56:50.420909 | orchestrator | 2026-04-13 00:56:50 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED 2026-04-13 00:56:50.421768 | orchestrator | 2026-04-13 00:56:50 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:56:50.422119 | orchestrator | 2026-04-13 00:56:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:53.462112 | orchestrator | 2026-04-13 00:56:53 | INFO  | Task 
dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:56:53.465915 | orchestrator | 2026-04-13 00:56:53 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:56:53.468890 | orchestrator | 2026-04-13 00:56:53 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:56:53.468988 | orchestrator | 2026-04-13 00:56:53 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:56:56.599846 | orchestrator | 2026-04-13 00:56:56 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:56:56.601253 | orchestrator | 2026-04-13 00:56:56 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:56:56.602951 | orchestrator | 2026-04-13 00:56:56 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:56:56.602996 | orchestrator | 2026-04-13 00:56:56 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:56:59.653596 | orchestrator | 2026-04-13 00:56:59 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:56:59.653722 | orchestrator | 2026-04-13 00:56:59 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:56:59.654334 | orchestrator | 2026-04-13 00:56:59 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:56:59.654539 | orchestrator | 2026-04-13 00:56:59 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:02.703099 | orchestrator | 2026-04-13 00:57:02 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:02.705237 | orchestrator | 2026-04-13 00:57:02 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:02.705309 | orchestrator | 2026-04-13 00:57:02 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:02.705332 | orchestrator | 2026-04-13 00:57:02 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:05.749051 | orchestrator | 2026-04-13 00:57:05 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:05.750370 | orchestrator | 2026-04-13 00:57:05 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:05.752482 | orchestrator | 2026-04-13 00:57:05 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:05.752523 | orchestrator | 2026-04-13 00:57:05 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:08.801142 | orchestrator | 2026-04-13 00:57:08 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:08.804304 | orchestrator | 2026-04-13 00:57:08 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:08.807117 | orchestrator | 2026-04-13 00:57:08 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:08.807205 | orchestrator | 2026-04-13 00:57:08 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:11.850587 | orchestrator | 2026-04-13 00:57:11 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:11.853027 | orchestrator | 2026-04-13 00:57:11 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:11.854879 | orchestrator | 2026-04-13 00:57:11 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:11.854909 | orchestrator | 2026-04-13 00:57:11 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:14.897074 | orchestrator | 2026-04-13 00:57:14 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:14.897148 | orchestrator | 2026-04-13 00:57:14 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:14.898201 | orchestrator | 2026-04-13 00:57:14 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:14.898218 | orchestrator | 2026-04-13 00:57:14 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:17.961808 | orchestrator | 2026-04-13 00:57:17 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:17.964406 | orchestrator | 2026-04-13 00:57:17 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:17.966324 | orchestrator | 2026-04-13 00:57:17 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:17.966396 | orchestrator | 2026-04-13 00:57:17 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:21.016728 | orchestrator | 2026-04-13 00:57:21 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:21.020411 | orchestrator | 2026-04-13 00:57:21 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:21.027005 | orchestrator | 2026-04-13 00:57:21 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:21.027086 | orchestrator | 2026-04-13 00:57:21 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:24.075326 | orchestrator | 2026-04-13 00:57:24 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:24.077003 | orchestrator | 2026-04-13 00:57:24 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:24.079252 | orchestrator | 2026-04-13 00:57:24 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:24.079309 | orchestrator | 2026-04-13 00:57:24 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:27.126740 | orchestrator | 2026-04-13 00:57:27 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:27.128806 | orchestrator | 2026-04-13 00:57:27 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:27.131122 | orchestrator | 2026-04-13 00:57:27 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:27.131191 | orchestrator | 2026-04-13 00:57:27 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:30.179017 | orchestrator | 2026-04-13 00:57:30 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:30.180853 | orchestrator | 2026-04-13 00:57:30 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:30.183488 | orchestrator | 2026-04-13 00:57:30 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:30.183534 | orchestrator | 2026-04-13 00:57:30 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:33.233296 | orchestrator | 2026-04-13 00:57:33 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:33.234409 | orchestrator | 2026-04-13 00:57:33 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:33.236658 | orchestrator | 2026-04-13 00:57:33 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:33.237108 | orchestrator | 2026-04-13 00:57:33 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:36.280821 | orchestrator | 2026-04-13 00:57:36 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:36.282709 | orchestrator | 2026-04-13 00:57:36 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:36.284028 | orchestrator | 2026-04-13 00:57:36 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:36.284058 | orchestrator | 2026-04-13 00:57:36 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:39.340462 | orchestrator | 2026-04-13 00:57:39 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:39.342265 | orchestrator | 2026-04-13 00:57:39 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:39.343758 | orchestrator | 2026-04-13 00:57:39 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:39.343800 | orchestrator | 2026-04-13 00:57:39 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:42.398669 | orchestrator | 2026-04-13 00:57:42 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:42.399557 | orchestrator | 2026-04-13 00:57:42 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:42.400461 | orchestrator | 2026-04-13 00:57:42 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:42.400489 | orchestrator | 2026-04-13 00:57:42 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:45.450133 | orchestrator | 2026-04-13 00:57:45 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:45.450878 | orchestrator | 2026-04-13 00:57:45 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:45.452316 | orchestrator | 2026-04-13 00:57:45 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:45.452366 | orchestrator | 2026-04-13 00:57:45 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:48.514552 | orchestrator | 2026-04-13 00:57:48 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:48.518419 | orchestrator | 2026-04-13 00:57:48 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:48.521727 | orchestrator | 2026-04-13 00:57:48 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:48.521790 | orchestrator | 2026-04-13 00:57:48 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:51.560663 | orchestrator | 2026-04-13 00:57:51 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:51.563617 | orchestrator | 2026-04-13 00:57:51 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state STARTED
2026-04-13 00:57:51.568647 | orchestrator | 2026-04-13 00:57:51 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:51.568731 | orchestrator | 2026-04-13 00:57:51 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:54.602554 | orchestrator | 2026-04-13 00:57:54 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:54.612456 | orchestrator | 2026-04-13 00:57:54 | INFO  | Task c28ca6f6-7852-490e-9d41-53793fbfd339 is in state SUCCESS
2026-04-13 00:57:54.614533 | orchestrator |
2026-04-13 00:57:54.614588 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-13 00:57:54.614600 | orchestrator | 2.16.14
2026-04-13 00:57:54.614609 | orchestrator |
2026-04-13 00:57:54.614618 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-04-13 00:57:54.614626 | orchestrator |
2026-04-13 00:57:54.614634 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-13 00:57:54.614643 | orchestrator | Monday 13 April 2026 00:46:29 +0000 (0:00:00.848) 0:00:00.848 **********
2026-04-13 00:57:54.614652 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:57:54.614661 | orchestrator |
2026-04-13 00:57:54.614669 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-13 00:57:54.614677 | orchestrator | Monday 13 April 2026 00:46:30 +0000 (0:00:01.374) 0:00:02.223 **********
2026-04-13 00:57:54.614685 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.614693 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.614701 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:54.614709 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.614717 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:54.614725 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:54.614734 | orchestrator |
2026-04-13 00:57:54.614742 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-13 00:57:54.614750 | orchestrator | Monday 13 April 2026 00:46:32 +0000 (0:00:01.855) 0:00:04.078 **********
2026-04-13 00:57:54.614758 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.614766 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.614863 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.614875 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:54.614883 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:54.614891 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:54.614899 | orchestrator |
2026-04-13 00:57:54.614907 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-13 00:57:54.614916 | orchestrator | Monday 13 April 2026 00:46:33 +0000 (0:00:00.724) 0:00:04.803 **********
2026-04-13 00:57:54.614924 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.614932 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.614940 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.614948 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:54.614956 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:54.614964 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:54.614972 | orchestrator |
2026-04-13 00:57:54.614980 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-13 00:57:54.614988 | orchestrator | Monday 13 April 2026 00:46:34 +0000 (0:00:01.111) 0:00:05.914 **********
2026-04-13 00:57:54.614996 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.615004 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.615012 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.615019 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:54.615027 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:54.615035 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:54.615043 | orchestrator |
2026-04-13 00:57:54.615056 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-13 00:57:54.615074 | orchestrator | Monday 13 April 2026 00:46:35 +0000 (0:00:00.862) 0:00:06.777 **********
2026-04-13 00:57:54.615092 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.615105 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.615762 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.615793 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:54.615806 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:54.615820 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:54.615831 | orchestrator |
2026-04-13 00:57:54.615839 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-13 00:57:54.615848 | orchestrator | Monday 13 April 2026 00:46:36 +0000 (0:00:01.292) 0:00:08.070 **********
2026-04-13 00:57:54.615856 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.615864 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.615872 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.615880 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:54.615888 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:54.615896 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:54.615904 | orchestrator |
2026-04-13 00:57:54.615912 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-13 00:57:54.615921 | orchestrator | Monday 13 April 2026 00:46:38 +0000 (0:00:01.301) 0:00:09.371 **********
2026-04-13 00:57:54.615929 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.615938 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.615946 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.615954 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.615962 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.615970 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.615978 | orchestrator |
2026-04-13 00:57:54.615986 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-13 00:57:54.615994 | orchestrator | Monday 13 April 2026 00:46:39 +0000 (0:00:01.371) 0:00:10.742 **********
2026-04-13 00:57:54.616002 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.616010 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.616018 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.616026 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:54.616034 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:54.616042 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:54.616050 | orchestrator |
2026-04-13 00:57:54.616059 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-13 00:57:54.616077 | orchestrator | Monday 13 April 2026 00:46:40 +0000 (0:00:01.159) 0:00:11.902 **********
2026-04-13 00:57:54.616086 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-13 00:57:54.616098 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-13 00:57:54.616111 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-13 00:57:54.616125 | orchestrator |
2026-04-13 00:57:54.617003 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-13 00:57:54.617020 | orchestrator | Monday 13 April 2026 00:46:41 +0000 (0:00:01.167) 0:00:13.069 **********
2026-04-13 00:57:54.617029 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.617037 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.617045 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.617053 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:54.617318 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:54.617341 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:54.617376 | orchestrator |
2026-04-13 00:57:54.617385 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-13 00:57:54.617393 | orchestrator | Monday 13 April 2026 00:46:43 +0000 (0:00:01.929) 0:00:14.999 **********
2026-04-13 00:57:54.617402 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-13 00:57:54.617410 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-13 00:57:54.617418 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-13 00:57:54.617426 | orchestrator |
2026-04-13 00:57:54.617435 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-13 00:57:54.617768 | orchestrator | Monday 13 April 2026 00:46:46 +0000 (0:00:02.585) 0:00:17.584 **********
2026-04-13 00:57:54.617787 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-13 00:57:54.617799 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-13 00:57:54.617812 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-13 00:57:54.617827 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.617841 | orchestrator |
2026-04-13 00:57:54.617855 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-13 00:57:54.617869 | orchestrator | Monday 13 April 2026 00:46:46 +0000 (0:00:00.438) 0:00:18.023 **********
2026-04-13 00:57:54.617879 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-13 00:57:54.617890 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-13 00:57:54.617898 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-13 00:57:54.617906 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.617915 | orchestrator |
2026-04-13 00:57:54.617923 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-13 00:57:54.617931 | orchestrator | Monday 13 April 2026 00:46:48 +0000 (0:00:01.619) 0:00:19.642 **********
2026-04-13 00:57:54.617941 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-13 00:57:54.617963 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-13 00:57:54.618136 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-13 00:57:54.618148 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.618156 | orchestrator |
2026-04-13 00:57:54.618164 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-13 00:57:54.618172 | orchestrator | Monday 13 April 2026 00:46:48 +0000 (0:00:00.350) 0:00:19.993 **********
2026-04-13 00:57:54.618923 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-13 00:46:44.465938', 'end': '2026-04-13 00:46:44.560712', 'delta': '0:00:00.094774', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-13 00:57:54.618960 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-13 00:46:45.229161', 'end': '2026-04-13 00:46:45.312505', 'delta': '0:00:00.083344', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-13 00:57:54.618976 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-13 00:46:46.084730', 'end': '2026-04-13 00:46:46.176023', 'delta': '0:00:00.091293', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-13 00:57:54.618991 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.619004 | orchestrator |
2026-04-13 00:57:54.619018 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-13 00:57:54.619033 | orchestrator | Monday 13 April 2026 00:46:49 +0000 (0:00:00.663) 0:00:20.657 **********
2026-04-13 00:57:54.619047 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.619062 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.619076 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.619084 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:54.619107 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:54.619122 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:54.619136 | orchestrator |
2026-04-13 00:57:54.619148 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-13 00:57:54.619157 | orchestrator | Monday 13 April 2026 00:46:52 +0000 (0:00:03.125) 0:00:23.782 **********
2026-04-13 00:57:54.619165 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.619173 | orchestrator |
2026-04-13 00:57:54.619181 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-13 00:57:54.619189 | orchestrator | Monday 13 April 2026 00:46:53 +0000 (0:00:00.678) 0:00:24.460 **********
2026-04-13 00:57:54.619197 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.619205 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.619213 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.619222 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.619230 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.619238 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.619246 | orchestrator |
2026-04-13 00:57:54.619254 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-13 00:57:54.619262 | orchestrator | Monday 13 April 2026 00:46:54 +0000 (0:00:01.012) 0:00:25.473 **********
2026-04-13 00:57:54.619270 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.619278 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.619286 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.619294 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.619302 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.619310 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.619318 | orchestrator |
2026-04-13 00:57:54.619326 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-13 00:57:54.619334 | orchestrator | Monday 13 April 2026 00:46:55 +0000 (0:00:01.253) 0:00:26.727 **********
2026-04-13 00:57:54.619369 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.619379 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.619387 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.619395 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.619403 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.619411 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.619420 | orchestrator |
2026-04-13 00:57:54.619428 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-13 00:57:54.619436 | orchestrator | Monday 13 April 2026 00:46:56 +0000 (0:00:00.757) 0:00:27.484 **********
2026-04-13 00:57:54.619444 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.619452 | orchestrator |
2026-04-13 00:57:54.619460 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-13 00:57:54.619468 | orchestrator | Monday 13 April 2026 00:46:56 +0000 (0:00:00.111) 0:00:27.596 **********
2026-04-13 00:57:54.619476 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.619501 | orchestrator |
2026-04-13 00:57:54.619510 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-13 00:57:54.619518 | orchestrator | Monday 13 April 2026 00:46:56 +0000 (0:00:00.288) 0:00:27.885 **********
2026-04-13 00:57:54.619526 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.619540 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.619549 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.619557 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.619566 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.619576 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.619585 | orchestrator |
2026-04-13 00:57:54.619679 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-13 00:57:54.619698 | orchestrator | Monday 13 April 2026 00:46:57 +0000 (0:00:00.840) 0:00:28.726 **********
2026-04-13 00:57:54.619714 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.619728 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.619743 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.619771 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.619787 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.619797 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.619807 | orchestrator |
2026-04-13 00:57:54.619816 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-13 00:57:54.619825 | orchestrator | Monday 13 April 2026 00:46:58 +0000 (0:00:01.450) 0:00:30.177 **********
2026-04-13 00:57:54.619834 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.619843 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.619852 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.619861 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.619870 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.619879 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.619888 | orchestrator |
2026-04-13 00:57:54.619897 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-13 00:57:54.619906 | orchestrator | Monday 13 April 2026 00:46:59 +0000 (0:00:00.929) 0:00:30.946 **********
2026-04-13 00:57:54.619915 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.619924 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.619933 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.619942 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.619951 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.619958 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.619967 | orchestrator |
2026-04-13 00:57:54.619975 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-13 00:57:54.619983 | orchestrator | Monday 13 April 2026 00:47:00 +0000 (0:00:00.929) 0:00:31.876 **********
2026-04-13 00:57:54.619991 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.619999 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.620007 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.620015 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.620023 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.620032 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.620040 | orchestrator |
2026-04-13 00:57:54.620066 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-13 00:57:54.620075 | orchestrator | Monday 13 April 2026 00:47:01 +0000 (0:00:00.852) 0:00:32.730 **********
2026-04-13 00:57:54.620083 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.620091 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.620099 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.620107 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.620115 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.620123 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.620131 | orchestrator |
2026-04-13 00:57:54.620139 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-13 00:57:54.620148 | orchestrator | Monday 13 April 2026 00:47:04 +0000 (0:00:03.155) 0:00:35.886 **********
2026-04-13 00:57:54.620156 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.620164 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.620178 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.620191 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.620205 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.620218 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.620231 | orchestrator |
2026-04-13 00:57:54.620239 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-13 00:57:54.620248 | orchestrator | Monday 13 April 2026 00:47:06 +0000 (0:00:01.972) 0:00:37.858 **********
2026-04-13 00:57:54.620257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 00:57:54.620273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 00:57:54.620282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 00:57:54.620410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 00:57:54.620433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 00:57:54.620448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 00:57:54.620462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 00:57:54.620475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 00:57:54.620488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651', 'scsi-SQEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part1', 'scsi-SQEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part14', 'scsi-SQEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part15', 'scsi-SQEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part16', 'scsi-SQEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-13 00:57:54.620570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-13 00:57:54.620587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 00:57:54.620601 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.620616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 00:57:54.620631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none',
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.620645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.620659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.620678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.620686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.620695 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.620760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.620775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.620789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.620838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9', 'scsi-SQEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part1', 'scsi-SQEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part14', 'scsi-SQEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part15', 'scsi-SQEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part16', 'scsi-SQEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.620863 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.620889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.620974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.620988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.620997 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06', 'scsi-SQEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part1', 'scsi-SQEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part14', 'scsi-SQEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part15', 'scsi-SQEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part16', 'scsi-SQEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.621153 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.621173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--273f60d0--eab1--5837--bb33--0c04c9e5b829-osd--block--273f60d0--eab1--5837--bb33--0c04c9e5b829', 'dm-uuid-LVM-Mr1Q93NeSsnqlaYzlizzQ82P3R69N73YnF4wV9m7xmyazb6rsYJT7xb0zocD08yt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f99b2314--ad51--5797--a71e--17207c9800e6-osd--block--f99b2314--ad51--5797--a71e--17207c9800e6', 'dm-uuid-LVM-Zv4PurkWYeoDs9KB6u8YAxs5qYmjOzJ7edNlLzVRvDP617MCxld659gQGqVso69K'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621238 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.621246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--976187fe--8802--504d--92cd--339995e22605-osd--block--976187fe--8802--504d--92cd--339995e22605', 'dm-uuid-LVM-tRfeWyEsbCcRzYaI0KmmkGukknCbNfxEirUZgI6deh8waBk2mMICIOw8e11sjiBA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621382 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.621396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--204a2e69--8032--57e4--80e8--bdb37f98e657-osd--block--204a2e69--8032--57e4--80e8--bdb37f98e657', 'dm-uuid-LVM-BgltwyKEc1hQK7TJ3EvhVOEE61h7GR8jqzNvFt9z9mBySS0of86UAOJIH8eRQC1B'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621461 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part1', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part14', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part15', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part16', 
'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.621561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--273f60d0--eab1--5837--bb33--0c04c9e5b829-osd--block--273f60d0--eab1--5837--bb33--0c04c9e5b829'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vjQyqS-0O2i-oUfi-QrIp-EEvb-mkza-Ay8B4d', 'scsi-0QEMU_QEMU_HARDDISK_0679126a-4000-4d61-a7db-c334b9d13f77', 'scsi-SQEMU_QEMU_HARDDISK_0679126a-4000-4d61-a7db-c334b9d13f77'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.621601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621713 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'sdc', 'value': {'holders': ['ceph--f99b2314--ad51--5797--a71e--17207c9800e6-osd--block--f99b2314--ad51--5797--a71e--17207c9800e6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8A021o-0SEM-qE3F-L4Wz-tepU-5Ebc-2TkWkY', 'scsi-0QEMU_QEMU_HARDDISK_9561ecc7-53f2-4f93-a506-8a94937d6a2f', 'scsi-SQEMU_QEMU_HARDDISK_9561ecc7-53f2-4f93-a506-8a94937d6a2f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.621733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.621774 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--976187fe--8802--504d--92cd--339995e22605-osd--block--976187fe--8802--504d--92cd--339995e22605'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wPL5U6-PwRf-m1u5-PNtC-WxG6-QRHR-4sCXGb', 'scsi-0QEMU_QEMU_HARDDISK_64ba95e0-52ec-4080-a400-33c71893d605', 'scsi-SQEMU_QEMU_HARDDISK_64ba95e0-52ec-4080-a400-33c71893d605'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.621861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--204a2e69--8032--57e4--80e8--bdb37f98e657-osd--block--204a2e69--8032--57e4--80e8--bdb37f98e657'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XtfceA-Mbkv-edmG-nfsU-T6X9-jaeN-0URiWL', 'scsi-0QEMU_QEMU_HARDDISK_8eda79f4-f653-48ca-bc7b-44aba519c194', 'scsi-SQEMU_QEMU_HARDDISK_8eda79f4-f653-48ca-bc7b-44aba519c194'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.621882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9aa3d683-c16f-4a6c-9923-af2b5f9d7d5e', 'scsi-SQEMU_QEMU_HARDDISK_9aa3d683-c16f-4a6c-9923-af2b5f9d7d5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.621898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.621920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae95053f--cfae--50f3--8301--23c2132e6da4-osd--block--ae95053f--cfae--50f3--8301--23c2132e6da4', 'dm-uuid-LVM-wGY9KIRhm7IaVKgPekBld64Nsr4cXFHYTnMbF7axTSTNUFWMfy3NmO8CcXI9BhjY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621934 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.621948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36e0079f-b8cc-463e-a3d4-692b22821d05', 'scsi-SQEMU_QEMU_HARDDISK_36e0079f-b8cc-463e-a3d4-692b22821d05'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.621962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--42f39a41--1a89--55d6--ba76--16e64e7a2b2d-osd--block--42f39a41--1a89--55d6--ba76--16e64e7a2b2d', 'dm-uuid-LVM-GnYlbSDmKKf8kqe05EYZgvzXvfiTNv27Pd4xX5u2Umcq5s1KRyrmBZw287rcJfR2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.621996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.622136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.622160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.622174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.622196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.622210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.622224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.622237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:57:54.622331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.622418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ae95053f--cfae--50f3--8301--23c2132e6da4-osd--block--ae95053f--cfae--50f3--8301--23c2132e6da4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IkZIFC-QO06-F9OK-5MzU-4gck-L7wj-os076W', 'scsi-0QEMU_QEMU_HARDDISK_2beae69f-4f2c-4ffb-b1cc-4fe56058469a', 'scsi-SQEMU_QEMU_HARDDISK_2beae69f-4f2c-4ffb-b1cc-4fe56058469a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.622436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--42f39a41--1a89--55d6--ba76--16e64e7a2b2d-osd--block--42f39a41--1a89--55d6--ba76--16e64e7a2b2d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g1Hjx0-VEhr-pSSU-0d55-M01s-wOvL-5jZgev', 'scsi-0QEMU_QEMU_HARDDISK_7036bc7f-1d9f-4bbc-89ec-79faed4557a7', 'scsi-SQEMU_QEMU_HARDDISK_7036bc7f-1d9f-4bbc-89ec-79faed4557a7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.622472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.622482 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_210099df-3e7f-48c2-8d6b-572e8a7c1923', 'scsi-SQEMU_QEMU_HARDDISK_210099df-3e7f-48c2-8d6b-572e8a7c1923'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.622558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:57:54.622577 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.622591 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.622605 | orchestrator | 2026-04-13 00:57:54.622619 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-13 00:57:54.622633 | orchestrator | Monday 13 April 2026 00:47:09 +0000 (0:00:03.260) 0:00:41.118 ********** 2026-04-13 00:57:54.622646 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.622682 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.622696 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.622710 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.622724 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.622743 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.622846 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.622901 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.622918 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651', 'scsi-SQEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part1', 'scsi-SQEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part14', 'scsi-SQEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part15', 'scsi-SQEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part16', 'scsi-SQEMU_QEMU_HARDDISK_cc9058a9-513c-44a1-a232-346d8ffae651-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-13 00:57:54.622992 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623018 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623033 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623050 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623061 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623070 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623079 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623147 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623166 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623181 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9', 'scsi-SQEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part1', 'scsi-SQEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part14', 'scsi-SQEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part15', 'scsi-SQEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part16', 'scsi-SQEMU_QEMU_HARDDISK_a7dd9f71-8adc-487c-8257-2cef985b8ae9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 
512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623201 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623223 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.623311 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623329 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623361 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623379 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623396 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623411 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623492 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623519 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623534 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.623550 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06', 'scsi-SQEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part1', 'scsi-SQEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part14', 'scsi-SQEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part15', 
'scsi-SQEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part16', 'scsi-SQEMU_QEMU_HARDDISK_1b3da9c0-5113-4abf-81e6-0eb99113ad06-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623565 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623673 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': 
{'ids': ['dm-name-ceph--273f60d0--eab1--5837--bb33--0c04c9e5b829-osd--block--273f60d0--eab1--5837--bb33--0c04c9e5b829', 'dm-uuid-LVM-Mr1Q93NeSsnqlaYzlizzQ82P3R69N73YnF4wV9m7xmyazb6rsYJT7xb0zocD08yt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623695 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f99b2314--ad51--5797--a71e--17207c9800e6-osd--block--f99b2314--ad51--5797--a71e--17207c9800e6', 'dm-uuid-LVM-Zv4PurkWYeoDs9KB6u8YAxs5qYmjOzJ7edNlLzVRvDP617MCxld659gQGqVso69K'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623711 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-04-13 00:57:54.623725 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.623739 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623754 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-04-13 00:57:54.623866 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623885 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae95053f--cfae--50f3--8301--23c2132e6da4-osd--block--ae95053f--cfae--50f3--8301--23c2132e6da4', 'dm-uuid-LVM-wGY9KIRhm7IaVKgPekBld64Nsr4cXFHYTnMbF7axTSTNUFWMfy3NmO8CcXI9BhjY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623901 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--42f39a41--1a89--55d6--ba76--16e64e7a2b2d-osd--block--42f39a41--1a89--55d6--ba76--16e64e7a2b2d', 'dm-uuid-LVM-GnYlbSDmKKf8kqe05EYZgvzXvfiTNv27Pd4xX5u2Umcq5s1KRyrmBZw287rcJfR2'], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623915 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623931 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--976187fe--8802--504d--92cd--339995e22605-osd--block--976187fe--8802--504d--92cd--339995e22605', 'dm-uuid-LVM-tRfeWyEsbCcRzYaI0KmmkGukknCbNfxEirUZgI6deh8waBk2mMICIOw8e11sjiBA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.623962 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624064 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624083 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--204a2e69--8032--57e4--80e8--bdb37f98e657-osd--block--204a2e69--8032--57e4--80e8--bdb37f98e657', 'dm-uuid-LVM-BgltwyKEc1hQK7TJ3EvhVOEE61h7GR8jqzNvFt9z9mBySS0of86UAOJIH8eRQC1B'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624097 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624111 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624199 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part1', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part14', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part15', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part16', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-13 00:57:54.624226 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624243 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--273f60d0--eab1--5837--bb33--0c04c9e5b829-osd--block--273f60d0--eab1--5837--bb33--0c04c9e5b829'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vjQyqS-0O2i-oUfi-QrIp-EEvb-mkza-Ay8B4d', 'scsi-0QEMU_QEMU_HARDDISK_0679126a-4000-4d61-a7db-c334b9d13f77', 'scsi-SQEMU_QEMU_HARDDISK_0679126a-4000-4d61-a7db-c334b9d13f77'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624258 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624272 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624399 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f99b2314--ad51--5797--a71e--17207c9800e6-osd--block--f99b2314--ad51--5797--a71e--17207c9800e6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8A021o-0SEM-qE3F-L4Wz-tepU-5Ebc-2TkWkY', 'scsi-0QEMU_QEMU_HARDDISK_9561ecc7-53f2-4f93-a506-8a94937d6a2f', 'scsi-SQEMU_QEMU_HARDDISK_9561ecc7-53f2-4f93-a506-8a94937d6a2f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624421 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624436 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36e0079f-b8cc-463e-a3d4-692b22821d05', 'scsi-SQEMU_QEMU_HARDDISK_36e0079f-b8cc-463e-a3d4-692b22821d05'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624451 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624465 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624488 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624503 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.624596 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624615 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624631 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624646 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624745 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-13 00:57:54.624777 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624794 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--976187fe--8802--504d--92cd--339995e22605-osd--block--976187fe--8802--504d--92cd--339995e22605'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wPL5U6-PwRf-m1u5-PNtC-WxG6-QRHR-4sCXGb', 'scsi-0QEMU_QEMU_HARDDISK_64ba95e0-52ec-4080-a400-33c71893d605', 'scsi-SQEMU_QEMU_HARDDISK_64ba95e0-52ec-4080-a400-33c71893d605'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624809 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624830 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--204a2e69--8032--57e4--80e8--bdb37f98e657-osd--block--204a2e69--8032--57e4--80e8--bdb37f98e657'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XtfceA-Mbkv-edmG-nfsU-T6X9-jaeN-0URiWL', 'scsi-0QEMU_QEMU_HARDDISK_8eda79f4-f653-48ca-bc7b-44aba519c194', 'scsi-SQEMU_QEMU_HARDDISK_8eda79f4-f653-48ca-bc7b-44aba519c194'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624906 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9aa3d683-c16f-4a6c-9923-af2b5f9d7d5e', 'scsi-SQEMU_QEMU_HARDDISK_9aa3d683-c16f-4a6c-9923-af2b5f9d7d5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624920 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624936 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.624951 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.624965 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.625067 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-13 00:57:54.625100 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ae95053f--cfae--50f3--8301--23c2132e6da4-osd--block--ae95053f--cfae--50f3--8301--23c2132e6da4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IkZIFC-QO06-F9OK-5MzU-4gck-L7wj-os076W', 'scsi-0QEMU_QEMU_HARDDISK_2beae69f-4f2c-4ffb-b1cc-4fe56058469a', 'scsi-SQEMU_QEMU_HARDDISK_2beae69f-4f2c-4ffb-b1cc-4fe56058469a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.625117 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--42f39a41--1a89--55d6--ba76--16e64e7a2b2d-osd--block--42f39a41--1a89--55d6--ba76--16e64e7a2b2d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g1Hjx0-VEhr-pSSU-0d55-M01s-wOvL-5jZgev', 'scsi-0QEMU_QEMU_HARDDISK_7036bc7f-1d9f-4bbc-89ec-79faed4557a7', 'scsi-SQEMU_QEMU_HARDDISK_7036bc7f-1d9f-4bbc-89ec-79faed4557a7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.625140 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_210099df-3e7f-48c2-8d6b-572e8a7c1923', 'scsi-SQEMU_QEMU_HARDDISK_210099df-3e7f-48c2-8d6b-572e8a7c1923'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.625156 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:57:54.625182 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.625194 | orchestrator | 2026-04-13 00:57:54.625203 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-13 00:57:54.625211 | orchestrator | Monday 13 April 2026 00:47:12 +0000 (0:00:03.014) 0:00:44.133 ********** 2026-04-13 00:57:54.625305 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.625320 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.625392 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.625404 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.625412 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.625420 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.625428 | orchestrator | 2026-04-13 00:57:54.625436 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-13 00:57:54.625445 | orchestrator | Monday 13 April 2026 00:47:14 +0000 (0:00:02.050) 0:00:46.184 ********** 2026-04-13 00:57:54.625453 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.625461 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.625469 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.625477 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.625485 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.625493 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.625501 | orchestrator | 2026-04-13 00:57:54.625509 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-13 00:57:54.625517 | orchestrator | Monday 13 April 2026 00:47:16 +0000 (0:00:01.328) 0:00:47.513 ********** 2026-04-13 00:57:54.625525 | orchestrator | skipping: [testbed-node-0] 2026-04-13 
00:57:54.625533 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.625541 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.625549 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.625557 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.625566 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.625574 | orchestrator | 2026-04-13 00:57:54.625582 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-13 00:57:54.625590 | orchestrator | Monday 13 April 2026 00:47:17 +0000 (0:00:01.675) 0:00:49.188 ********** 2026-04-13 00:57:54.625598 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.625607 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.625615 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.625630 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.625638 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.625646 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.625654 | orchestrator | 2026-04-13 00:57:54.625663 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-13 00:57:54.625671 | orchestrator | Monday 13 April 2026 00:47:18 +0000 (0:00:01.023) 0:00:50.212 ********** 2026-04-13 00:57:54.625679 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.625687 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.625695 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.625703 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.625711 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.625719 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.625726 | orchestrator | 2026-04-13 00:57:54.625733 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-13 00:57:54.625739 | orchestrator | Monday 13 April 
2026 00:47:21 +0000 (0:00:02.399) 0:00:52.611 ********** 2026-04-13 00:57:54.625746 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.625753 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.625760 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.625767 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.625774 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.625780 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.625788 | orchestrator | 2026-04-13 00:57:54.625794 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-13 00:57:54.625801 | orchestrator | Monday 13 April 2026 00:47:22 +0000 (0:00:01.358) 0:00:53.970 ********** 2026-04-13 00:57:54.625808 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-13 00:57:54.625815 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-13 00:57:54.625822 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-13 00:57:54.625829 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-13 00:57:54.625836 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-13 00:57:54.625843 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-13 00:57:54.625850 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-13 00:57:54.625857 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-13 00:57:54.625863 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-13 00:57:54.625870 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-13 00:57:54.625877 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-13 00:57:54.625884 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-13 00:57:54.625891 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-13 00:57:54.625898 | orchestrator | ok: 
[testbed-node-2] => (item=testbed-node-2) 2026-04-13 00:57:54.625905 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-13 00:57:54.625912 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-13 00:57:54.625920 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-13 00:57:54.625928 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-13 00:57:54.625935 | orchestrator | 2026-04-13 00:57:54.625944 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-13 00:57:54.625952 | orchestrator | Monday 13 April 2026 00:47:26 +0000 (0:00:03.478) 0:00:57.448 ********** 2026-04-13 00:57:54.625960 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-13 00:57:54.625968 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-13 00:57:54.625976 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-13 00:57:54.625984 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.625992 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-13 00:57:54.626000 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-13 00:57:54.626037 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-13 00:57:54.626051 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-13 00:57:54.626059 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-13 00:57:54.626067 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-13 00:57:54.626103 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.626111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-13 00:57:54.626119 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.626127 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-13 00:57:54.626135 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-13 00:57:54.626143 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-13 00:57:54.626150 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.626158 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-13 00:57:54.626165 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-13 00:57:54.626173 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.626181 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-13 00:57:54.626189 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-13 00:57:54.626197 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-13 00:57:54.626204 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.626212 | orchestrator | 2026-04-13 00:57:54.626220 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-13 00:57:54.626228 | orchestrator | Monday 13 April 2026 00:47:27 +0000 (0:00:01.246) 0:00:58.695 ********** 2026-04-13 00:57:54.626235 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.626243 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.626251 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.626259 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.626267 | orchestrator | 2026-04-13 00:57:54.626275 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-13 00:57:54.626283 | orchestrator | Monday 13 April 2026 00:47:28 +0000 (0:00:01.526) 0:01:00.221 ********** 2026-04-13 00:57:54.626292 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.626300 | orchestrator | skipping: 
[testbed-node-4] 2026-04-13 00:57:54.626308 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.626316 | orchestrator | 2026-04-13 00:57:54.626324 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-13 00:57:54.626330 | orchestrator | Monday 13 April 2026 00:47:29 +0000 (0:00:00.383) 0:01:00.604 ********** 2026-04-13 00:57:54.626337 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.626359 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.626367 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.626374 | orchestrator | 2026-04-13 00:57:54.626381 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-13 00:57:54.626388 | orchestrator | Monday 13 April 2026 00:47:29 +0000 (0:00:00.486) 0:01:01.091 ********** 2026-04-13 00:57:54.626395 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.626402 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.626409 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.626415 | orchestrator | 2026-04-13 00:57:54.626422 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-13 00:57:54.626429 | orchestrator | Monday 13 April 2026 00:47:30 +0000 (0:00:00.701) 0:01:01.793 ********** 2026-04-13 00:57:54.626436 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.626443 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.626449 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.626456 | orchestrator | 2026-04-13 00:57:54.626463 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-13 00:57:54.626475 | orchestrator | Monday 13 April 2026 00:47:32 +0000 (0:00:01.545) 0:01:03.338 ********** 2026-04-13 00:57:54.626482 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-13 00:57:54.626488 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-13 00:57:54.626495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-13 00:57:54.626502 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.626509 | orchestrator | 2026-04-13 00:57:54.626515 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-13 00:57:54.626522 | orchestrator | Monday 13 April 2026 00:47:32 +0000 (0:00:00.373) 0:01:03.711 ********** 2026-04-13 00:57:54.626529 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-13 00:57:54.626536 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-13 00:57:54.626542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-13 00:57:54.626549 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.626556 | orchestrator | 2026-04-13 00:57:54.626563 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-13 00:57:54.626569 | orchestrator | Monday 13 April 2026 00:47:32 +0000 (0:00:00.403) 0:01:04.115 ********** 2026-04-13 00:57:54.626576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-13 00:57:54.626583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-13 00:57:54.626590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-13 00:57:54.626596 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.626603 | orchestrator | 2026-04-13 00:57:54.626610 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-13 00:57:54.626617 | orchestrator | Monday 13 April 2026 00:47:33 +0000 (0:00:00.445) 0:01:04.560 ********** 2026-04-13 00:57:54.626624 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.626631 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.626638 | orchestrator | ok: [testbed-node-5] 
2026-04-13 00:57:54.626644 | orchestrator | 2026-04-13 00:57:54.626651 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-13 00:57:54.626658 | orchestrator | Monday 13 April 2026 00:47:33 +0000 (0:00:00.307) 0:01:04.868 ********** 2026-04-13 00:57:54.626668 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-13 00:57:54.626676 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-13 00:57:54.626683 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-13 00:57:54.626690 | orchestrator | 2026-04-13 00:57:54.626718 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-13 00:57:54.626726 | orchestrator | Monday 13 April 2026 00:47:34 +0000 (0:00:00.807) 0:01:05.675 ********** 2026-04-13 00:57:54.626733 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-13 00:57:54.626740 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-13 00:57:54.626747 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-13 00:57:54.626754 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-13 00:57:54.626761 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-13 00:57:54.626768 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-13 00:57:54.626775 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-13 00:57:54.626782 | orchestrator | 2026-04-13 00:57:54.626789 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-13 00:57:54.626796 | orchestrator | Monday 13 April 2026 00:47:35 +0000 (0:00:01.281) 0:01:06.957 ********** 2026-04-13 00:57:54.626803 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2026-04-13 00:57:54.626810 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-13 00:57:54.626824 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-13 00:57:54.626831 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-13 00:57:54.626837 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-13 00:57:54.626844 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-13 00:57:54.626851 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-13 00:57:54.626858 | orchestrator | 2026-04-13 00:57:54.626865 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-13 00:57:54.626872 | orchestrator | Monday 13 April 2026 00:47:37 +0000 (0:00:02.135) 0:01:09.093 ********** 2026-04-13 00:57:54.626879 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.626887 | orchestrator | 2026-04-13 00:57:54.626894 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-13 00:57:54.626900 | orchestrator | Monday 13 April 2026 00:47:39 +0000 (0:00:01.401) 0:01:10.495 ********** 2026-04-13 00:57:54.626907 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.626914 | orchestrator | 2026-04-13 00:57:54.626921 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-13 00:57:54.626928 | orchestrator | Monday 13 April 2026 
00:47:40 +0000 (0:00:01.671) 0:01:12.166 **********
orchestrator | ok: [testbed-node-0]
orchestrator | skipping: [testbed-node-3]
orchestrator | ok: [testbed-node-1]
orchestrator | skipping: [testbed-node-4]
orchestrator | ok: [testbed-node-2]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
orchestrator | Monday 13 April 2026 00:47:42 +0000 (0:00:01.341) 0:01:13.508 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
orchestrator | Monday 13 April 2026 00:47:43 +0000 (0:00:01.503) 0:01:15.011 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-5]
orchestrator | ok: [testbed-node-4]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
orchestrator | Monday 13 April 2026 00:47:45 +0000 (0:00:01.349) 0:01:16.361 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
orchestrator | Monday 13 April 2026 00:47:47 +0000 (0:00:01.914) 0:01:18.275 **********
orchestrator | skipping: [testbed-node-3]
orchestrator | ok: [testbed-node-0]
orchestrator | skipping: [testbed-node-4]
orchestrator | ok: [testbed-node-1]
orchestrator | skipping: [testbed-node-5]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
orchestrator | Monday 13 April 2026 00:47:47 +0000 (0:00:00.825) 0:01:19.100 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
orchestrator | Monday 13 April 2026 00:47:48 +0000 (0:00:00.988) 0:01:20.089 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
orchestrator | Monday 13 April 2026 00:47:49 +0000 (0:00:00.689) 0:01:20.778 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-5]
orchestrator | ok: [testbed-node-4]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
orchestrator | Monday 13 April 2026 00:47:51 +0000 (0:00:01.931) 0:01:22.709 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
orchestrator | Monday 13 April 2026 00:47:52 +0000 (0:00:01.550) 0:01:24.260 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
orchestrator | Monday 13 April 2026 00:47:53 +0000 (0:00:00.926) 0:01:25.186 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
orchestrator | Monday 13 April 2026 00:47:55 +0000 (0:00:01.260) 0:01:26.447 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
orchestrator | Monday 13 April 2026 00:47:56 +0000 (0:00:01.273) 0:01:27.721 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
orchestrator | Monday 13 April 2026 00:47:57 +0000 (0:00:00.861) 0:01:28.583 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
orchestrator | Monday 13 April 2026 00:47:58 +0000 (0:00:01.102) 0:01:29.685 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
orchestrator | Monday 13 April 2026 00:47:59 +0000 (0:00:00.742) 0:01:30.427 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
orchestrator | Monday 13 April 2026 00:48:00 +0000 (0:00:01.264) 0:01:31.692 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
orchestrator | Monday 13 April 2026 00:48:01 +0000 (0:00:00.911) 0:01:32.603 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
orchestrator | Monday 13 April 2026 00:48:02 +0000 (0:00:00.865) 0:01:33.469 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
orchestrator | Monday 13 April 2026 00:48:03 +0000 (0:00:01.347) 0:01:34.817 **********
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
orchestrator | Monday 13 April 2026 00:48:05 +0000 (0:00:02.475) 0:01:36.500 **********
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
orchestrator | Monday 13 April 2026 00:48:07 +0000 (0:00:02.475) 0:01:38.975 **********
orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
orchestrator | Monday 13 April 2026 00:48:08 +0000 (0:00:01.239) 0:01:40.215 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
orchestrator | Monday 13 April 2026 00:48:09 +0000 (0:00:00.644) 0:01:40.860 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
orchestrator | Monday 13 April 2026 00:48:10 +0000 (0:00:01.285) 0:01:42.145 **********
orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator |
orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
orchestrator | Monday 13 April 2026 00:48:12 +0000 (0:00:01.658) 0:01:43.804 **********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
orchestrator | Monday 13 April 2026 00:48:13 +0000 (0:00:01.317) 0:01:45.121 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
orchestrator | Monday 13 April 2026 00:48:14 +0000 (0:00:00.552) 0:01:45.673 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
orchestrator | Monday 13 April 2026 00:48:15 +0000 (0:00:00.921) 0:01:46.595 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
orchestrator | Monday 13 April 2026 00:48:15 +0000 (0:00:00.526) 0:01:47.122 **********
orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
orchestrator | Monday 13 April 2026 00:48:16 +0000 (0:00:01.049) 0:01:48.171 **********
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-5]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
orchestrator | Monday 13 April 2026 00:49:34 +0000 (0:01:17.906) 0:03:06.078 **********
orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
orchestrator | Monday 13 April 2026 00:49:35 +0000 (0:00:01.035) 0:03:07.114 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
orchestrator | Monday 13 April 2026 00:49:36 +0000 (0:00:01.012) 0:03:08.126 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
orchestrator | Monday 13 April 2026 00:49:37 +0000 (0:00:00.241) 0:03:08.368 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
orchestrator | Monday 13 April 2026 00:49:38 +0000 (0:00:01.081) 0:03:09.449 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
orchestrator | Monday 13 April 2026 00:49:39 +0000 (0:00:01.431) 0:03:10.881 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
orchestrator | Monday 13 April 2026 00:49:40 +0000 (0:00:00.990) 0:03:11.872 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
orchestrator | Monday 13 April 2026 00:49:43 +0000 (0:00:02.930) 0:03:14.802 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
orchestrator | Monday 13 April 2026 00:49:44 +0000 (0:00:00.687) 0:03:15.490 **********
orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
orchestrator | Monday 13 April 2026 00:49:45 +0000 (0:00:01.207) 0:03:16.698 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
orchestrator | Monday 13 April 2026 00:49:46 +0000 (0:00:00.722) 0:03:17.420 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-node-1]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
orchestrator | Monday 13 April 2026 00:49:47 +0000 (0:00:01.116) 0:03:18.536 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
orchestrator | Monday 13 April 2026 00:49:48 +0000 (0:00:00.867) 0:03:19.406 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
orchestrator | Monday 13 April 2026 00:49:49 +0000 (0:00:01.193) 0:03:20.600 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
orchestrator | Monday 13 April 2026 00:49:50 +0000 (0:00:00.772) 0:03:21.373 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
orchestrator | Monday 13 April 2026 00:49:51 +0000 (0:00:00.968) 0:03:22.341 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
orchestrator | Monday 13 April 2026 00:49:51 +0000 (0:00:00.633) 0:03:22.975 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
orchestrator | Monday 13 April 2026 00:49:52 +0000 (0:00:00.828) 0:03:23.803 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-5]
orchestrator | ok: [testbed-node-4]
orchestrator |
orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
orchestrator | Monday 13 April 2026 00:49:53 +0000 (0:00:01.362) 0:03:25.166 **********
orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
orchestrator | Monday 13 April 2026 00:49:55 +0000 (0:00:01.379) 0:03:26.545 **********
orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-13 00:57:54.630596 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-13 00:57:54.630602 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-04-13 00:57:54.630609 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-13 00:57:54.630615 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-13 00:57:54.630621 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-13 00:57:54.630627 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-13 00:57:54.630634 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-13 00:57:54.630640 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-04-13 00:57:54.630646 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-04-13 00:57:54.630653 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-13 00:57:54.630659 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-13 00:57:54.630665 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-13 00:57:54.630671 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-13 00:57:54.630678 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-04-13 00:57:54.630683 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-04-13 00:57:54.630689 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-13 00:57:54.630694 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-13 00:57:54.630700 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-13 00:57:54.630705 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 
2026-04-13 00:57:54.630710 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-13 00:57:54.630716 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-04-13 00:57:54.630721 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-13 00:57:54.630727 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-13 00:57:54.630732 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-13 00:57:54.630738 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-13 00:57:54.630748 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-13 00:57:54.630754 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-13 00:57:54.630759 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-13 00:57:54.630765 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-13 00:57:54.630770 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-13 00:57:54.630776 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-13 00:57:54.630781 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-13 00:57:54.630787 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-13 00:57:54.630792 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-13 00:57:54.630798 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-13 00:57:54.630803 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-13 00:57:54.630809 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-13 00:57:54.630814 | orchestrator | changed: [testbed-node-1] => 
(item=/var/lib/ceph/bootstrap-mds) 2026-04-13 00:57:54.630820 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-04-13 00:57:54.630825 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-13 00:57:54.630831 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-13 00:57:54.630836 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-13 00:57:54.630842 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-13 00:57:54.630847 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-13 00:57:54.630853 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-13 00:57:54.630861 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-13 00:57:54.630867 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-04-13 00:57:54.630873 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-13 00:57:54.630896 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-13 00:57:54.630902 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-13 00:57:54.630908 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-13 00:57:54.630914 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-13 00:57:54.630919 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-13 00:57:54.630925 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-04-13 00:57:54.630930 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-04-13 00:57:54.630936 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-13 00:57:54.630941 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-13 00:57:54.630949 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-13 00:57:54.630959 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-04-13 00:57:54.630969 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-04-13 00:57:54.630979 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-04-13 00:57:54.630989 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-04-13 00:57:54.630999 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-04-13 00:57:54.631009 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-04-13 00:57:54.631018 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-04-13 00:57:54.631034 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-04-13 00:57:54.631044 | orchestrator | 2026-04-13 00:57:54.631054 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-13 00:57:54.631065 | orchestrator | Monday 13 April 2026 00:50:02 +0000 (0:00:06.985) 0:03:33.531 ********** 2026-04-13 00:57:54.631075 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.631086 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.631093 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.631099 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.631105 | orchestrator | 2026-04-13 00:57:54.631111 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-13 00:57:54.631116 | orchestrator | Monday 13 April 2026 00:50:03 +0000 (0:00:01.149) 0:03:34.680 ********** 2026-04-13 00:57:54.631122 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-13 00:57:54.631128 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-13 00:57:54.631133 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-13 00:57:54.631139 | orchestrator | 2026-04-13 00:57:54.631145 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-13 00:57:54.631150 | orchestrator | Monday 13 April 2026 00:50:04 +0000 (0:00:00.776) 0:03:35.457 ********** 2026-04-13 00:57:54.631156 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-13 00:57:54.631161 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-13 00:57:54.631167 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-13 00:57:54.631172 | orchestrator | 2026-04-13 00:57:54.631178 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-13 00:57:54.631183 | orchestrator | Monday 13 April 2026 00:50:05 +0000 (0:00:01.557) 0:03:37.014 ********** 2026-04-13 00:57:54.631189 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.631195 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.631200 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.631206 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.631211 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.631217 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.631222 | orchestrator | 2026-04-13 00:57:54.631228 | orchestrator 
| TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-13 00:57:54.631233 | orchestrator | Monday 13 April 2026 00:50:06 +0000 (0:00:00.782) 0:03:37.797 ********** 2026-04-13 00:57:54.631239 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.631244 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.631250 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.631255 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.631261 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.631266 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.631272 | orchestrator | 2026-04-13 00:57:54.631278 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-13 00:57:54.631283 | orchestrator | Monday 13 April 2026 00:50:07 +0000 (0:00:01.048) 0:03:38.846 ********** 2026-04-13 00:57:54.631288 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.631294 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.631300 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.631305 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.631311 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.631321 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.631332 | orchestrator | 2026-04-13 00:57:54.631338 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-13 00:57:54.631355 | orchestrator | Monday 13 April 2026 00:50:08 +0000 (0:00:00.749) 0:03:39.595 ********** 2026-04-13 00:57:54.631387 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.631393 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.631399 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.631404 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.631410 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.631415 | 
orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.631421 | orchestrator | 2026-04-13 00:57:54.631426 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-13 00:57:54.631432 | orchestrator | Monday 13 April 2026 00:50:09 +0000 (0:00:00.935) 0:03:40.530 ********** 2026-04-13 00:57:54.631437 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.631443 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.631448 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.631454 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.631459 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.631465 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.631470 | orchestrator | 2026-04-13 00:57:54.631476 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-13 00:57:54.631481 | orchestrator | Monday 13 April 2026 00:50:10 +0000 (0:00:00.735) 0:03:41.266 ********** 2026-04-13 00:57:54.631487 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.631492 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.631498 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.631503 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.631509 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.631514 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.631520 | orchestrator | 2026-04-13 00:57:54.631525 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-13 00:57:54.631531 | orchestrator | Monday 13 April 2026 00:50:11 +0000 (0:00:01.380) 0:03:42.647 ********** 2026-04-13 00:57:54.631537 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.631542 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.631548 | orchestrator | skipping: 
[testbed-node-2] 2026-04-13 00:57:54.631553 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.631559 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.631564 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.631570 | orchestrator | 2026-04-13 00:57:54.631575 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-13 00:57:54.631581 | orchestrator | Monday 13 April 2026 00:50:12 +0000 (0:00:00.773) 0:03:43.421 ********** 2026-04-13 00:57:54.631587 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.631592 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.631597 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.631603 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.631608 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.631614 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.631619 | orchestrator | 2026-04-13 00:57:54.631625 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-13 00:57:54.631631 | orchestrator | Monday 13 April 2026 00:50:12 +0000 (0:00:00.661) 0:03:44.082 ********** 2026-04-13 00:57:54.631636 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.631642 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.631647 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.631653 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.631658 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.631664 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.631669 | orchestrator | 2026-04-13 00:57:54.631675 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-13 00:57:54.631685 | orchestrator | Monday 13 April 2026 00:50:15 +0000 (0:00:02.918) 0:03:47.001 ********** 2026-04-13 00:57:54.631690 | 
orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.631696 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.631701 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.631707 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.631712 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.631718 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.631723 | orchestrator | 2026-04-13 00:57:54.631729 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-13 00:57:54.631735 | orchestrator | Monday 13 April 2026 00:50:16 +0000 (0:00:00.771) 0:03:47.772 ********** 2026-04-13 00:57:54.631740 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.631746 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.631751 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.631757 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.631762 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.631768 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.631773 | orchestrator | 2026-04-13 00:57:54.631779 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-13 00:57:54.631785 | orchestrator | Monday 13 April 2026 00:50:17 +0000 (0:00:01.252) 0:03:49.025 ********** 2026-04-13 00:57:54.631790 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.631796 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.631801 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.631806 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.631812 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.631817 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.631823 | orchestrator | 2026-04-13 00:57:54.631828 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-13 00:57:54.631834 | orchestrator | 
Monday 13 April 2026 00:50:18 +0000 (0:00:00.750) 0:03:49.776 ********** 2026-04-13 00:57:54.631839 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.631845 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.631851 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.631856 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-13 00:57:54.631865 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-13 00:57:54.631870 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-13 00:57:54.631876 | orchestrator | 2026-04-13 00:57:54.631898 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-13 00:57:54.631905 | orchestrator | Monday 13 April 2026 00:50:19 +0000 (0:00:00.995) 0:03:50.771 ********** 2026-04-13 00:57:54.631910 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.631916 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.631921 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.631928 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-04-13 00:57:54.631936 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-04-13 
00:57:54.631942 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.631948 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-04-13 00:57:54.631957 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-04-13 00:57:54.631963 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.631968 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-04-13 00:57:54.631974 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-04-13 00:57:54.631980 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.631985 | orchestrator | 2026-04-13 00:57:54.631991 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-13 00:57:54.631997 | orchestrator | Monday 13 April 2026 00:50:20 +0000 (0:00:00.976) 0:03:51.747 ********** 2026-04-13 00:57:54.632002 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.632008 | orchestrator | 
skipping: [testbed-node-1] 2026-04-13 00:57:54.632013 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.632019 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.632024 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.632030 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.632035 | orchestrator | 2026-04-13 00:57:54.632041 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-13 00:57:54.632046 | orchestrator | Monday 13 April 2026 00:50:21 +0000 (0:00:01.056) 0:03:52.804 ********** 2026-04-13 00:57:54.632052 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.632057 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.632063 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.632069 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.632074 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.632079 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.632085 | orchestrator | 2026-04-13 00:57:54.632090 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-13 00:57:54.632096 | orchestrator | Monday 13 April 2026 00:50:22 +0000 (0:00:00.694) 0:03:53.498 ********** 2026-04-13 00:57:54.632101 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.632107 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.632113 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.632118 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.632123 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.632129 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.632134 | orchestrator | 2026-04-13 00:57:54.632140 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-13 00:57:54.632145 | 
orchestrator | Monday 13 April 2026 00:50:23 +0000 (0:00:00.964) 0:03:54.463 ********** 2026-04-13 00:57:54.632151 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.632156 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.632162 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.632167 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.632173 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.632178 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.632184 | orchestrator | 2026-04-13 00:57:54.632195 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-13 00:57:54.632201 | orchestrator | Monday 13 April 2026 00:50:23 +0000 (0:00:00.664) 0:03:55.127 ********** 2026-04-13 00:57:54.632206 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.632229 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.632236 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.632241 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.632247 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.632252 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.632258 | orchestrator | 2026-04-13 00:57:54.632263 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-13 00:57:54.632269 | orchestrator | Monday 13 April 2026 00:50:24 +0000 (0:00:01.007) 0:03:56.135 ********** 2026-04-13 00:57:54.632274 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.632280 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.632286 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.632291 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.632297 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.632302 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.632308 | orchestrator | 2026-04-13 00:57:54.632313 | 
orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-13 00:57:54.632319 | orchestrator | Monday 13 April 2026 00:50:25 +0000 (0:00:00.715) 0:03:56.851 ********** 2026-04-13 00:57:54.632324 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-13 00:57:54.632330 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-13 00:57:54.632368 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-13 00:57:54.632374 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.632380 | orchestrator | 2026-04-13 00:57:54.632386 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-13 00:57:54.632391 | orchestrator | Monday 13 April 2026 00:50:26 +0000 (0:00:00.573) 0:03:57.425 ********** 2026-04-13 00:57:54.632397 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-13 00:57:54.632402 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-13 00:57:54.632408 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-13 00:57:54.632413 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.632419 | orchestrator | 2026-04-13 00:57:54.632424 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-13 00:57:54.632430 | orchestrator | Monday 13 April 2026 00:50:26 +0000 (0:00:00.771) 0:03:58.197 ********** 2026-04-13 00:57:54.632435 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-13 00:57:54.632441 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-13 00:57:54.632446 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-13 00:57:54.632452 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.632457 | orchestrator | 2026-04-13 00:57:54.632463 | orchestrator | TASK [ceph-facts : Reset rgw_instances 
(workaround)] ***************************
2026-04-13 00:57:54.632468 | orchestrator | Monday 13 April 2026 00:50:27 +0000 (0:00:00.358) 0:03:58.555 **********
2026-04-13 00:57:54.632474 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.632479 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.632485 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.632490 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:54.632496 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:54.632501 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:54.632507 | orchestrator |
2026-04-13 00:57:54.632512 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-13 00:57:54.632518 | orchestrator | Monday 13 April 2026 00:50:27 +0000 (0:00:00.602) 0:03:59.158 **********
2026-04-13 00:57:54.632523 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-13 00:57:54.632529 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.632538 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-13 00:57:54.632544 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.632549 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-13 00:57:54.632555 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.632560 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-13 00:57:54.632566 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-13 00:57:54.632571 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-13 00:57:54.632577 | orchestrator |
2026-04-13 00:57:54.632582 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-13 00:57:54.632588 | orchestrator | Monday 13 April 2026 00:50:30 +0000 (0:00:02.144) 0:04:01.302 **********
2026-04-13 00:57:54.632593 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:57:54.632599 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:57:54.632604 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:57:54.632610 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:57:54.632615 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:57:54.632621 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:57:54.632626 | orchestrator |
2026-04-13 00:57:54.632632 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-13 00:57:54.632637 | orchestrator | Monday 13 April 2026 00:50:33 +0000 (0:00:03.239) 0:04:04.542 **********
2026-04-13 00:57:54.632643 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:57:54.632648 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:57:54.632654 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:57:54.632659 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:57:54.632665 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:57:54.632670 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:57:54.632676 | orchestrator |
2026-04-13 00:57:54.632681 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-13 00:57:54.632687 | orchestrator | Monday 13 April 2026 00:50:34 +0000 (0:00:01.221) 0:04:05.764 **********
2026-04-13 00:57:54.632692 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.632697 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.632703 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.632709 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:57:54.632715 | orchestrator |
2026-04-13 00:57:54.632720 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-13 00:57:54.632736 | orchestrator | Monday 13 April 2026 00:50:35 +0000 (0:00:01.120) 0:04:06.885 **********
2026-04-13 00:57:54.632742 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.632748 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.632753 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.632778 | orchestrator |
2026-04-13 00:57:54.632784 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-13 00:57:54.632790 | orchestrator | Monday 13 April 2026 00:50:35 +0000 (0:00:00.351) 0:04:07.236 **********
2026-04-13 00:57:54.632796 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:57:54.632801 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:57:54.632807 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:57:54.632812 | orchestrator |
2026-04-13 00:57:54.632818 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-13 00:57:54.632823 | orchestrator | Monday 13 April 2026 00:50:37 +0000 (0:00:01.265) 0:04:08.502 **********
2026-04-13 00:57:54.632829 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-13 00:57:54.632834 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-13 00:57:54.632840 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-13 00:57:54.632845 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.632851 | orchestrator |
2026-04-13 00:57:54.632856 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-13 00:57:54.632862 | orchestrator | Monday 13 April 2026 00:50:38 +0000 (0:00:00.933) 0:04:09.435 **********
2026-04-13 00:57:54.632873 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.632878 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.632885 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.632895 | orchestrator |
2026-04-13 00:57:54.632904 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-13 00:57:54.632912 | orchestrator | Monday 13 April 2026 00:50:38 +0000 (0:00:00.627) 0:04:10.063 **********
2026-04-13 00:57:54.632922 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.632931 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.632940 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.632949 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:57:54.632958 | orchestrator |
2026-04-13 00:57:54.632967 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-13 00:57:54.632973 | orchestrator | Monday 13 April 2026 00:50:40 +0000 (0:00:01.305) 0:04:11.368 **********
2026-04-13 00:57:54.632979 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:57:54.632985 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:57:54.632990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:57:54.632995 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633001 | orchestrator |
2026-04-13 00:57:54.633006 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-13 00:57:54.633012 | orchestrator | Monday 13 April 2026 00:50:40 +0000 (0:00:00.814) 0:04:12.183 **********
2026-04-13 00:57:54.633017 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633023 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.633028 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.633034 | orchestrator |
2026-04-13 00:57:54.633039 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-13 00:57:54.633045 | orchestrator | Monday 13 April 2026 00:50:41 +0000 (0:00:00.880) 0:04:13.063 **********
2026-04-13 00:57:54.633050 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633056 | orchestrator |
2026-04-13 00:57:54.633061 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-13 00:57:54.633067 | orchestrator | Monday 13 April 2026 00:50:42 +0000 (0:00:00.249) 0:04:13.312 **********
2026-04-13 00:57:54.633072 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633078 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.633083 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.633089 | orchestrator |
2026-04-13 00:57:54.633094 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-13 00:57:54.633100 | orchestrator | Monday 13 April 2026 00:50:42 +0000 (0:00:00.342) 0:04:13.655 **********
2026-04-13 00:57:54.633105 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633111 | orchestrator |
2026-04-13 00:57:54.633116 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-13 00:57:54.633122 | orchestrator | Monday 13 April 2026 00:50:42 +0000 (0:00:00.239) 0:04:13.894 **********
2026-04-13 00:57:54.633127 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633133 | orchestrator |
2026-04-13 00:57:54.633138 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-13 00:57:54.633144 | orchestrator | Monday 13 April 2026 00:50:42 +0000 (0:00:00.239) 0:04:14.133 **********
2026-04-13 00:57:54.633149 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633155 | orchestrator |
2026-04-13 00:57:54.633160 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-13 00:57:54.633166 | orchestrator | Monday 13 April 2026 00:50:43 +0000 (0:00:00.137) 0:04:14.271 **********
2026-04-13 00:57:54.633171 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633177 | orchestrator |
2026-04-13 00:57:54.633182 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-13 00:57:54.633188 | orchestrator | Monday 13 April 2026 00:50:43 +0000 (0:00:00.359) 0:04:14.630 **********
2026-04-13 00:57:54.633198 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633203 | orchestrator |
2026-04-13 00:57:54.633209 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-13 00:57:54.633214 | orchestrator | Monday 13 April 2026 00:50:43 +0000 (0:00:00.272) 0:04:14.902 **********
2026-04-13 00:57:54.633220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:57:54.633225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:57:54.633231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:57:54.633236 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633242 | orchestrator |
2026-04-13 00:57:54.633251 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-13 00:57:54.633257 | orchestrator | Monday 13 April 2026 00:50:44 +0000 (0:00:00.704) 0:04:15.607 **********
2026-04-13 00:57:54.633263 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633289 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.633296 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.633301 | orchestrator |
2026-04-13 00:57:54.633307 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-13 00:57:54.633312 | orchestrator | Monday 13 April 2026 00:50:45 +0000 (0:00:00.762) 0:04:16.369 **********
2026-04-13 00:57:54.633318 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633324 | orchestrator |
2026-04-13 00:57:54.633329 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-13 00:57:54.633335 | orchestrator | Monday 13 April 2026 00:50:45 +0000 (0:00:00.307) 0:04:16.677 **********
2026-04-13 00:57:54.633340 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633361 | orchestrator |
2026-04-13 00:57:54.633371 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-13 00:57:54.633380 | orchestrator | Monday 13 April 2026 00:50:45 +0000 (0:00:00.235) 0:04:16.912 **********
2026-04-13 00:57:54.633389 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.633399 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.633408 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.633415 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:57:54.633420 | orchestrator |
2026-04-13 00:57:54.633426 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-13 00:57:54.633431 | orchestrator | Monday 13 April 2026 00:50:46 +0000 (0:00:01.075) 0:04:17.988 **********
2026-04-13 00:57:54.633437 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:54.633443 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:54.633448 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:54.633454 | orchestrator |
2026-04-13 00:57:54.633459 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-13 00:57:54.633465 | orchestrator | Monday 13 April 2026 00:50:47 +0000 (0:00:00.403) 0:04:18.391 **********
2026-04-13 00:57:54.633470 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:57:54.633476 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:57:54.633481 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:57:54.633487 | orchestrator |
2026-04-13 00:57:54.633492 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-13 00:57:54.633498 | orchestrator | Monday 13 April 2026 00:50:48 +0000 (0:00:01.410) 0:04:19.801 **********
2026-04-13 00:57:54.633503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:57:54.633509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:57:54.633514 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:57:54.633520 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633525 | orchestrator |
2026-04-13 00:57:54.633531 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-13 00:57:54.633536 | orchestrator | Monday 13 April 2026 00:50:49 +0000 (0:00:00.775) 0:04:20.577 **********
2026-04-13 00:57:54.633547 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:54.633553 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:54.633558 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:54.633564 | orchestrator |
2026-04-13 00:57:54.633569 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-13 00:57:54.633575 | orchestrator | Monday 13 April 2026 00:50:49 +0000 (0:00:00.310) 0:04:20.887 **********
2026-04-13 00:57:54.633580 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.633586 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.633591 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.633597 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:57:54.633602 | orchestrator |
2026-04-13 00:57:54.633608 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-13 00:57:54.633613 | orchestrator | Monday 13 April 2026 00:50:50 +0000 (0:00:00.977) 0:04:21.865 **********
2026-04-13 00:57:54.633619 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:54.633624 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:54.633630 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:54.633635 | orchestrator |
2026-04-13 00:57:54.633641 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-13 00:57:54.633647 | orchestrator | Monday 13 April 2026 00:50:50 +0000 (0:00:00.376) 0:04:22.242 **********
2026-04-13 00:57:54.633652 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:57:54.633658 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:57:54.633663 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:57:54.633669 | orchestrator |
2026-04-13 00:57:54.633674 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-13 00:57:54.633680 | orchestrator | Monday 13 April 2026 00:50:52 +0000 (0:00:01.707) 0:04:23.949 **********
2026-04-13 00:57:54.633685 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:57:54.633691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:57:54.633696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:57:54.633702 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633707 | orchestrator |
2026-04-13 00:57:54.633713 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-13 00:57:54.633718 | orchestrator | Monday 13 April 2026 00:50:53 +0000 (0:00:00.662) 0:04:24.612 **********
2026-04-13 00:57:54.633724 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:54.633729 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:54.633735 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:54.633740 | orchestrator |
2026-04-13 00:57:54.633746 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-13 00:57:54.633752 | orchestrator | Monday 13 April 2026 00:50:53 +0000 (0:00:00.418) 0:04:25.031 **********
2026-04-13 00:57:54.633757 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.633763 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.633768 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.633777 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633782 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.633788 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.633793 | orchestrator |
2026-04-13 00:57:54.633817 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-13 00:57:54.633823 | orchestrator | Monday 13 April 2026 00:50:54 +0000 (0:00:00.823) 0:04:25.854 **********
2026-04-13 00:57:54.633829 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.633834 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:54.633840 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:54.633845 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:57:54.633851 | orchestrator |
2026-04-13 00:57:54.633857 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-13 00:57:54.633866 | orchestrator | Monday 13 April 2026 00:50:55 +0000 (0:00:01.321) 0:04:27.176 **********
2026-04-13 00:57:54.633871 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.633877 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.633883 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.633888 | orchestrator |
2026-04-13 00:57:54.633894 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-13 00:57:54.633899 | orchestrator | Monday 13 April 2026 00:50:56 +0000 (0:00:00.461) 0:04:27.638 **********
2026-04-13 00:57:54.633905 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:57:54.633910 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:57:54.633916 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:57:54.633921 | orchestrator |
2026-04-13 00:57:54.633927 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-13 00:57:54.633933 | orchestrator | Monday 13 April 2026 00:50:58 +0000 (0:00:01.767) 0:04:29.405 **********
2026-04-13 00:57:54.633938 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-13 00:57:54.633944 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-13 00:57:54.633949 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-13 00:57:54.633955 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.633960 | orchestrator |
2026-04-13 00:57:54.633966 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-13 00:57:54.633972 | orchestrator | Monday 13 April 2026 00:50:58 +0000 (0:00:00.630) 0:04:30.036 **********
2026-04-13 00:57:54.633977 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.633983 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.633988 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.633994 | orchestrator |
2026-04-13 00:57:54.633999 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-04-13 00:57:54.634005 | orchestrator |
2026-04-13 00:57:54.634010 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-13 00:57:54.634049 | orchestrator | Monday 13 April 2026 00:50:59 +0000 (0:00:00.712) 0:04:30.748 **********
2026-04-13 00:57:54.634056 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:57:54.634062 | orchestrator |
2026-04-13 00:57:54.634067 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-13 00:57:54.634073 | orchestrator | Monday 13 April 2026 00:51:00 +0000 (0:00:01.478) 0:04:32.227 **********
2026-04-13 00:57:54.634079 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:57:54.634084 | orchestrator |
2026-04-13 00:57:54.634090 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-13 00:57:54.634095 | orchestrator | Monday 13 April 2026 00:51:01 +0000 (0:00:00.895) 0:04:33.123 **********
2026-04-13 00:57:54.634101 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.634107 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.634112 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.634118 | orchestrator |
2026-04-13 00:57:54.634123 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-13 00:57:54.634129 | orchestrator | Monday 13 April 2026 00:51:03 +0000 (0:00:01.398) 0:04:34.525 **********
2026-04-13 00:57:54.634135 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.634140 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.634146 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.634151 | orchestrator |
2026-04-13 00:57:54.634157 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-13 00:57:54.634163 | orchestrator | Monday 13 April 2026 00:51:03 +0000 (0:00:00.664) 0:04:35.190 **********
2026-04-13 00:57:54.634168 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.634174 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.634179 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.634188 | orchestrator |
2026-04-13 00:57:54.634194 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-13 00:57:54.634199 | orchestrator | Monday 13 April 2026 00:51:04 +0000 (0:00:00.537) 0:04:35.727 **********
2026-04-13 00:57:54.634205 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.634211 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.634216 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.634222 | orchestrator |
2026-04-13 00:57:54.634227 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-13 00:57:54.634233 | orchestrator | Monday 13 April 2026 00:51:04 +0000 (0:00:00.346) 0:04:36.073 **********
2026-04-13 00:57:54.634238 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.634244 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.634250 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.634255 | orchestrator |
2026-04-13 00:57:54.634261 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-13 00:57:54.634266 | orchestrator | Monday 13 April 2026 00:51:05 +0000 (0:00:00.950) 0:04:37.024 **********
2026-04-13 00:57:54.634272 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.634277 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.634283 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.634288 | orchestrator |
2026-04-13 00:57:54.634294 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-13 00:57:54.634302 | orchestrator | Monday 13 April 2026 00:51:06 +0000 (0:00:00.595) 0:04:37.619 **********
2026-04-13 00:57:54.634308 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.634313 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.634319 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.634325 | orchestrator |
2026-04-13 00:57:54.634381 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-13 00:57:54.634390 | orchestrator | Monday 13 April 2026 00:51:06 +0000 (0:00:00.414) 0:04:38.033 **********
2026-04-13 00:57:54.634396 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.634401 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.634407 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.634412 | orchestrator |
2026-04-13 00:57:54.634418 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-13 00:57:54.634423 | orchestrator | Monday 13 April 2026 00:51:07 +0000 (0:00:00.872) 0:04:38.906 **********
2026-04-13 00:57:54.634429 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.634434 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.634439 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.634445 | orchestrator |
2026-04-13 00:57:54.634450 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-13 00:57:54.634456 | orchestrator | Monday 13 April 2026 00:51:08 +0000 (0:00:00.907) 0:04:39.814 **********
2026-04-13 00:57:54.634461 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.634467 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.634472 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.634478 | orchestrator |
2026-04-13 00:57:54.634483 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-13 00:57:54.634489 | orchestrator | Monday 13 April 2026 00:51:09 +0000 (0:00:00.504) 0:04:40.318 **********
2026-04-13 00:57:54.634494 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.634500 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.634505 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.634511 | orchestrator |
2026-04-13 00:57:54.634516 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-13 00:57:54.634522 | orchestrator | Monday 13 April 2026 00:51:09 +0000 (0:00:00.502) 0:04:40.820 **********
2026-04-13 00:57:54.634527 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.634533 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.634538 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.634544 | orchestrator |
2026-04-13 00:57:54.634549 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-13 00:57:54.634559 | orchestrator | Monday 13 April 2026 00:51:09 +0000 (0:00:00.320) 0:04:41.140 **********
2026-04-13 00:57:54.634564 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.634570 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.634575 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.634581 | orchestrator |
2026-04-13 00:57:54.634586 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-13 00:57:54.634592 | orchestrator | Monday 13 April 2026 00:51:10 +0000 (0:00:00.386) 0:04:41.527 **********
2026-04-13 00:57:54.634597 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.634603 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.634608 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.634614 | orchestrator |
2026-04-13 00:57:54.634619 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-13 00:57:54.634625 | orchestrator | Monday 13 April 2026 00:51:10 +0000 (0:00:00.399) 0:04:41.926 **********
2026-04-13 00:57:54.634630 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.634636 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.634641 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.634647 | orchestrator |
2026-04-13 00:57:54.634652 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-13 00:57:54.634658 | orchestrator | Monday 13 April 2026 00:51:11 +0000 (0:00:00.701) 0:04:42.628 **********
2026-04-13 00:57:54.634664 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.634669 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.634675 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.634680 | orchestrator |
2026-04-13 00:57:54.634686 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-13 00:57:54.634691 | orchestrator | Monday 13 April 2026 00:51:11 +0000 (0:00:00.463) 0:04:43.091 **********
2026-04-13 00:57:54.634697 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.634702 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.634708 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.634713 | orchestrator |
2026-04-13 00:57:54.634719 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-13 00:57:54.634724 | orchestrator | Monday 13 April 2026 00:51:12 +0000 (0:00:00.372) 0:04:43.464 **********
2026-04-13 00:57:54.634730 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.634735 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.634741 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.634746 | orchestrator |
2026-04-13 00:57:54.634752 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-13 00:57:54.634757 | orchestrator | Monday 13 April 2026 00:51:12 +0000 (0:00:00.365) 0:04:43.829 **********
2026-04-13 00:57:54.634763 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.634768 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.634774 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.634779 | orchestrator |
2026-04-13 00:57:54.634785 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-13 00:57:54.634790 | orchestrator | Monday 13 April 2026 00:51:13 +0000 (0:00:00.877) 0:04:44.706 **********
2026-04-13 00:57:54.634796 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.634801 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.634807 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.634812 | orchestrator |
2026-04-13 00:57:54.634818 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-13 00:57:54.634823 | orchestrator | Monday 13 April 2026 00:51:13 +0000 (0:00:00.455) 0:04:45.162 **********
2026-04-13 00:57:54.634829 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:57:54.634834 | orchestrator |
2026-04-13 00:57:54.634840 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-13 00:57:54.634846 | orchestrator | Monday 13 April 2026 00:51:14 +0000 (0:00:01.038) 0:04:46.200 **********
2026-04-13 00:57:54.634858 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.634863 | orchestrator |
2026-04-13 00:57:54.634869 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-13 00:57:54.634890 | orchestrator | Monday 13 April 2026 00:51:15 +0000 (0:00:00.144) 0:04:46.345 **********
2026-04-13 00:57:54.634897 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-13 00:57:54.634902 | orchestrator |
2026-04-13 00:57:54.634908 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-13 00:57:54.634913 | orchestrator | Monday 13 April 2026 00:51:16 +0000 (0:00:01.167) 0:04:47.513 **********
2026-04-13 00:57:54.634919 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.634924 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.634930 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.634935 | orchestrator |
2026-04-13 00:57:54.634940 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-13 00:57:54.634945 | orchestrator | Monday 13 April 2026 00:51:16 +0000 (0:00:00.453) 0:04:47.967 **********
2026-04-13 00:57:54.634950 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.634955 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.634960 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.634965 | orchestrator |
2026-04-13 00:57:54.634970 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-13 00:57:54.634974 | orchestrator | Monday 13 April 2026 00:51:17 +0000 (0:00:00.587) 0:04:48.554 **********
2026-04-13 00:57:54.634979 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:57:54.634984 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:57:54.634989 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:57:54.634994 | orchestrator |
2026-04-13 00:57:54.634999 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-13 00:57:54.635004 | orchestrator | Monday 13 April 2026 00:51:18 +0000 (0:00:01.283) 0:04:49.838 **********
2026-04-13 00:57:54.635009 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:57:54.635014 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:57:54.635018 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:57:54.635023 | orchestrator |
2026-04-13 00:57:54.635028 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-13 00:57:54.635033 | orchestrator | Monday 13 April 2026 00:51:20 +0000 (0:00:01.594) 0:04:51.432 **********
2026-04-13 00:57:54.635038 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:57:54.635043 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:57:54.635048 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:57:54.635052 | orchestrator |
2026-04-13 00:57:54.635057 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-13 00:57:54.635065 | orchestrator | Monday 13 April 2026 00:51:21 +0000 (0:00:01.038) 0:04:52.471 **********
2026-04-13 00:57:54.635073 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.635081 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.635090 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.635097 | orchestrator |
2026-04-13 00:57:54.635105 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-13 00:57:54.635114 | orchestrator | Monday 13 April 2026 00:51:21 +0000 (0:00:00.677) 0:04:53.149 **********
2026-04-13 00:57:54.635122 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:57:54.635130 | orchestrator |
2026-04-13 00:57:54.635139 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-13 00:57:54.635148 | orchestrator | Monday 13 April 2026 00:51:23 +0000 (0:00:01.811) 0:04:54.960 **********
2026-04-13 00:57:54.635156 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.635164 | orchestrator |
2026-04-13 00:57:54.635172 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-13 00:57:54.635181 | orchestrator | Monday 13 April 2026 00:51:24 +0000 (0:00:00.837) 0:04:55.797 **********
2026-04-13 00:57:54.635190 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-13 00:57:54.635198 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-13 00:57:54.635216 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-13 00:57:54.635225 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-13 00:57:54.635233 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-13 00:57:54.635242 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-13 00:57:54.635250 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-13 00:57:54.635259 | orchestrator | changed: [testbed-node-2 -> {{ item }}]
2026-04-13 00:57:54.635264 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-13 00:57:54.635269 | orchestrator | changed: [testbed-node-1 -> {{ item }}]
2026-04-13 00:57:54.635274 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-13 00:57:54.635279 | orchestrator | ok: [testbed-node-0 -> {{ item }}]
2026-04-13 00:57:54.635284 | orchestrator |
2026-04-13 00:57:54.635289 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-13 00:57:54.635294 | orchestrator | Monday 13 April 2026 00:51:28 +0000 (0:00:03.602) 0:04:59.400 **********
2026-04-13 00:57:54.635299 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:57:54.635304 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:57:54.635309 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:57:54.635314 | orchestrator |
2026-04-13 00:57:54.635319 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-13 00:57:54.635323 | orchestrator | Monday 13 April 2026 00:51:29 +0000 (0:00:01.675) 0:05:01.076 **********
2026-04-13 00:57:54.635328 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.635333 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.635338 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.635354 | orchestrator |
2026-04-13 00:57:54.635360 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-13 00:57:54.635365 | orchestrator | Monday 13 April 2026 00:51:30 +0000 (0:00:00.379) 0:05:01.456 **********
2026-04-13 00:57:54.635370 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:54.635375 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:54.635380 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:54.635385 | orchestrator |
2026-04-13 00:57:54.635390 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-13 00:57:54.635398 | orchestrator | Monday 13 April 2026 00:51:30 +0000 (0:00:00.411) 0:05:01.867 **********
2026-04-13 00:57:54.635403 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:57:54.635408 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:57:54.635413 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:57:54.635418 | orchestrator |
2026-04-13 00:57:54.635447 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-13 00:57:54.635452 | orchestrator | Monday 13 April 2026 00:51:33 +0000 (0:00:02.618) 0:05:04.486 **********
2026-04-13 00:57:54.635457 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:57:54.635462 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:57:54.635467 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:57:54.635472 | orchestrator |
2026-04-13 00:57:54.635477 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-13 00:57:54.635482 | orchestrator | Monday 13 April 2026 00:51:34 +0000 (0:00:01.394) 0:05:05.881 **********
2026-04-13 00:57:54.635487 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:54.635492 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:54.635497 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:54.635502 | orchestrator |
2026-04-13 00:57:54.635507 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-13 00:57:54.635512 | orchestrator | Monday 13 April 2026 00:51:34 +0000 (0:00:00.345) 0:05:06.226 **********
2026-04-13 00:57:54.635517 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:57:54.635522 | orchestrator |
2026-04-13 00:57:54.635527 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-13 00:57:54.635536 | orchestrator | Monday 13 April 2026 00:51:35 +0000 (0:00:00.553) 0:05:06.779 **********
2026-04-13 00:57:54.635541 | orchestrator |
skipping: [testbed-node-0] 2026-04-13 00:57:54.635546 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.635551 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.635556 | orchestrator | 2026-04-13 00:57:54.635561 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-13 00:57:54.635566 | orchestrator | Monday 13 April 2026 00:51:36 +0000 (0:00:00.578) 0:05:07.358 ********** 2026-04-13 00:57:54.635571 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.635576 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.635581 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.635586 | orchestrator | 2026-04-13 00:57:54.635591 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-13 00:57:54.635596 | orchestrator | Monday 13 April 2026 00:51:36 +0000 (0:00:00.300) 0:05:07.658 ********** 2026-04-13 00:57:54.635601 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:57:54.635606 | orchestrator | 2026-04-13 00:57:54.635611 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-13 00:57:54.635616 | orchestrator | Monday 13 April 2026 00:51:37 +0000 (0:00:00.609) 0:05:08.267 ********** 2026-04-13 00:57:54.635621 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:57:54.635626 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:57:54.635630 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:57:54.635635 | orchestrator | 2026-04-13 00:57:54.635640 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-13 00:57:54.635645 | orchestrator | Monday 13 April 2026 00:51:38 +0000 (0:00:01.944) 0:05:10.212 ********** 2026-04-13 00:57:54.635650 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:57:54.635655 | orchestrator | 
changed: [testbed-node-1] 2026-04-13 00:57:54.635660 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:57:54.635665 | orchestrator | 2026-04-13 00:57:54.635670 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-13 00:57:54.635675 | orchestrator | Monday 13 April 2026 00:51:40 +0000 (0:00:01.222) 0:05:11.435 ********** 2026-04-13 00:57:54.635680 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:57:54.635685 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:57:54.635690 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:57:54.635695 | orchestrator | 2026-04-13 00:57:54.635700 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-13 00:57:54.635705 | orchestrator | Monday 13 April 2026 00:51:42 +0000 (0:00:01.917) 0:05:13.352 ********** 2026-04-13 00:57:54.635710 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:57:54.635714 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:57:54.635719 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:57:54.635724 | orchestrator | 2026-04-13 00:57:54.635729 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-13 00:57:54.635734 | orchestrator | Monday 13 April 2026 00:51:43 +0000 (0:00:01.906) 0:05:15.259 ********** 2026-04-13 00:57:54.635739 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:57:54.635744 | orchestrator | 2026-04-13 00:57:54.635749 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-04-13 00:57:54.635754 | orchestrator | Monday 13 April 2026 00:51:44 +0000 (0:00:00.831) 0:05:16.090 ********** 2026-04-13 00:57:54.635759 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-04-13 00:57:54.635764 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.635769 | orchestrator | 2026-04-13 00:57:54.635774 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-13 00:57:54.635782 | orchestrator | Monday 13 April 2026 00:52:06 +0000 (0:00:21.545) 0:05:37.636 ********** 2026-04-13 00:57:54.635796 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.635805 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.635814 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.635820 | orchestrator | 2026-04-13 00:57:54.635825 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-13 00:57:54.635830 | orchestrator | Monday 13 April 2026 00:52:12 +0000 (0:00:06.396) 0:05:44.033 ********** 2026-04-13 00:57:54.635835 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.635840 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.635847 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.635855 | orchestrator | 2026-04-13 00:57:54.635867 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-13 00:57:54.635876 | orchestrator | Monday 13 April 2026 00:52:13 +0000 (0:00:00.320) 0:05:44.353 ********** 2026-04-13 00:57:54.635908 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fcdc11dc91ae574396c3896bb2a8e3a7fb3f4bc9'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-13 00:57:54.635916 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fcdc11dc91ae574396c3896bb2a8e3a7fb3f4bc9'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-13 00:57:54.635922 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fcdc11dc91ae574396c3896bb2a8e3a7fb3f4bc9'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-13 00:57:54.635928 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fcdc11dc91ae574396c3896bb2a8e3a7fb3f4bc9'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-13 00:57:54.635934 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fcdc11dc91ae574396c3896bb2a8e3a7fb3f4bc9'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-13 00:57:54.635939 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fcdc11dc91ae574396c3896bb2a8e3a7fb3f4bc9'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__fcdc11dc91ae574396c3896bb2a8e3a7fb3f4bc9'}])  2026-04-13 00:57:54.635945 | orchestrator | 2026-04-13 00:57:54.635950 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-04-13 00:57:54.635955 | orchestrator | Monday 13 April 2026 00:52:24 +0000 (0:00:10.911) 0:05:55.265 ********** 2026-04-13 00:57:54.635960 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.635966 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.635970 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.635976 | orchestrator | 2026-04-13 00:57:54.635981 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-13 00:57:54.635986 | orchestrator | Monday 13 April 2026 00:52:24 +0000 (0:00:00.355) 0:05:55.620 ********** 2026-04-13 00:57:54.635997 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:57:54.636003 | orchestrator | 2026-04-13 00:57:54.636008 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-13 00:57:54.636013 | orchestrator | Monday 13 April 2026 00:52:25 +0000 (0:00:00.755) 0:05:56.375 ********** 2026-04-13 00:57:54.636018 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.636023 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.636028 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.636033 | orchestrator | 2026-04-13 00:57:54.636039 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-13 00:57:54.636044 | orchestrator | Monday 13 April 2026 00:52:25 +0000 (0:00:00.347) 0:05:56.723 ********** 2026-04-13 00:57:54.636049 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.636054 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.636059 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.636064 | orchestrator | 2026-04-13 00:57:54.636069 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-13 
00:57:54.636074 | orchestrator | Monday 13 April 2026 00:52:25 +0000 (0:00:00.352) 0:05:57.076 ********** 2026-04-13 00:57:54.636079 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-13 00:57:54.636084 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-13 00:57:54.636089 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-13 00:57:54.636094 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.636099 | orchestrator | 2026-04-13 00:57:54.636105 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-13 00:57:54.636112 | orchestrator | Monday 13 April 2026 00:52:26 +0000 (0:00:00.890) 0:05:57.966 ********** 2026-04-13 00:57:54.636118 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.636123 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.636128 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.636133 | orchestrator | 2026-04-13 00:57:54.636154 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-04-13 00:57:54.636160 | orchestrator | 2026-04-13 00:57:54.636165 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-13 00:57:54.636170 | orchestrator | Monday 13 April 2026 00:52:27 +0000 (0:00:00.861) 0:05:58.827 ********** 2026-04-13 00:57:54.636175 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:57:54.636180 | orchestrator | 2026-04-13 00:57:54.636185 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-13 00:57:54.636190 | orchestrator | Monday 13 April 2026 00:52:28 +0000 (0:00:00.509) 0:05:59.337 ********** 2026-04-13 00:57:54.636195 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-13 00:57:54.636200 | orchestrator | 2026-04-13 00:57:54.636205 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-13 00:57:54.636210 | orchestrator | Monday 13 April 2026 00:52:28 +0000 (0:00:00.816) 0:06:00.153 ********** 2026-04-13 00:57:54.636215 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.636220 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.636225 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.636230 | orchestrator | 2026-04-13 00:57:54.636235 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-13 00:57:54.636240 | orchestrator | Monday 13 April 2026 00:52:29 +0000 (0:00:00.801) 0:06:00.955 ********** 2026-04-13 00:57:54.636245 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.636251 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.636256 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.636261 | orchestrator | 2026-04-13 00:57:54.636266 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-13 00:57:54.636271 | orchestrator | Monday 13 April 2026 00:52:30 +0000 (0:00:00.321) 0:06:01.277 ********** 2026-04-13 00:57:54.636280 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.636285 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.636290 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.636295 | orchestrator | 2026-04-13 00:57:54.636300 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-13 00:57:54.636305 | orchestrator | Monday 13 April 2026 00:52:30 +0000 (0:00:00.349) 0:06:01.627 ********** 2026-04-13 00:57:54.636310 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.636315 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.636320 | orchestrator | skipping: 
[testbed-node-2] 2026-04-13 00:57:54.636325 | orchestrator | 2026-04-13 00:57:54.636330 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-13 00:57:54.636335 | orchestrator | Monday 13 April 2026 00:52:30 +0000 (0:00:00.616) 0:06:02.243 ********** 2026-04-13 00:57:54.636340 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.636376 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.636382 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.636387 | orchestrator | 2026-04-13 00:57:54.636445 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-13 00:57:54.636452 | orchestrator | Monday 13 April 2026 00:52:31 +0000 (0:00:00.785) 0:06:03.029 ********** 2026-04-13 00:57:54.636457 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.636462 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.636467 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.636472 | orchestrator | 2026-04-13 00:57:54.636477 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-13 00:57:54.636482 | orchestrator | Monday 13 April 2026 00:52:32 +0000 (0:00:00.310) 0:06:03.339 ********** 2026-04-13 00:57:54.636487 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.636492 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.636497 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.636505 | orchestrator | 2026-04-13 00:57:54.636514 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-13 00:57:54.636522 | orchestrator | Monday 13 April 2026 00:52:32 +0000 (0:00:00.313) 0:06:03.653 ********** 2026-04-13 00:57:54.636530 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.636540 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.636549 | orchestrator | ok: [testbed-node-2] 2026-04-13 
00:57:54.636557 | orchestrator | 2026-04-13 00:57:54.636566 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-13 00:57:54.636571 | orchestrator | Monday 13 April 2026 00:52:33 +0000 (0:00:01.014) 0:06:04.668 ********** 2026-04-13 00:57:54.636576 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.636581 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.636586 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.636591 | orchestrator | 2026-04-13 00:57:54.636596 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-13 00:57:54.636601 | orchestrator | Monday 13 April 2026 00:52:34 +0000 (0:00:00.751) 0:06:05.420 ********** 2026-04-13 00:57:54.636606 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.636611 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.636616 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.636621 | orchestrator | 2026-04-13 00:57:54.636626 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-13 00:57:54.636631 | orchestrator | Monday 13 April 2026 00:52:34 +0000 (0:00:00.344) 0:06:05.764 ********** 2026-04-13 00:57:54.636636 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.636641 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.636646 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.636651 | orchestrator | 2026-04-13 00:57:54.636656 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-13 00:57:54.636661 | orchestrator | Monday 13 April 2026 00:52:34 +0000 (0:00:00.330) 0:06:06.095 ********** 2026-04-13 00:57:54.636667 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.636683 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.636692 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.636700 | orchestrator | 
2026-04-13 00:57:54.636712 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-13 00:57:54.636721 | orchestrator | Monday 13 April 2026 00:52:35 +0000 (0:00:00.305) 0:06:06.400 ********** 2026-04-13 00:57:54.636730 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.636735 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.636772 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.636783 | orchestrator | 2026-04-13 00:57:54.636791 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-13 00:57:54.636800 | orchestrator | Monday 13 April 2026 00:52:35 +0000 (0:00:00.560) 0:06:06.960 ********** 2026-04-13 00:57:54.636809 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.636817 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.636826 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.636831 | orchestrator | 2026-04-13 00:57:54.636836 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-13 00:57:54.636841 | orchestrator | Monday 13 April 2026 00:52:36 +0000 (0:00:00.303) 0:06:07.264 ********** 2026-04-13 00:57:54.636846 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.636851 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.636856 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.636861 | orchestrator | 2026-04-13 00:57:54.636866 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-13 00:57:54.636871 | orchestrator | Monday 13 April 2026 00:52:36 +0000 (0:00:00.334) 0:06:07.598 ********** 2026-04-13 00:57:54.636876 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.636881 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.636886 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.636891 | orchestrator | 
2026-04-13 00:57:54.636896 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-13 00:57:54.636900 | orchestrator | Monday 13 April 2026 00:52:36 +0000 (0:00:00.304) 0:06:07.903 ********** 2026-04-13 00:57:54.636905 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.636910 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.636915 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.636920 | orchestrator | 2026-04-13 00:57:54.636950 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-13 00:57:54.636957 | orchestrator | Monday 13 April 2026 00:52:37 +0000 (0:00:00.632) 0:06:08.535 ********** 2026-04-13 00:57:54.636962 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.636967 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.636972 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.636977 | orchestrator | 2026-04-13 00:57:54.636982 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-13 00:57:54.636987 | orchestrator | Monday 13 April 2026 00:52:37 +0000 (0:00:00.360) 0:06:08.896 ********** 2026-04-13 00:57:54.636992 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.636997 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.637002 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.637006 | orchestrator | 2026-04-13 00:57:54.637011 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-13 00:57:54.637017 | orchestrator | Monday 13 April 2026 00:52:38 +0000 (0:00:00.746) 0:06:09.642 ********** 2026-04-13 00:57:54.637026 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-13 00:57:54.637034 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-13 00:57:54.637043 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-04-13 00:57:54.637051 | orchestrator | 2026-04-13 00:57:54.637060 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-13 00:57:54.637069 | orchestrator | Monday 13 April 2026 00:52:39 +0000 (0:00:01.018) 0:06:10.661 ********** 2026-04-13 00:57:54.637106 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:57:54.637112 | orchestrator | 2026-04-13 00:57:54.637119 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-13 00:57:54.637126 | orchestrator | Monday 13 April 2026 00:52:40 +0000 (0:00:00.887) 0:06:11.548 ********** 2026-04-13 00:57:54.637134 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:57:54.637141 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:57:54.637149 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:57:54.637156 | orchestrator | 2026-04-13 00:57:54.637165 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-13 00:57:54.637173 | orchestrator | Monday 13 April 2026 00:52:41 +0000 (0:00:00.787) 0:06:12.336 ********** 2026-04-13 00:57:54.637182 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.637187 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.637191 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.637196 | orchestrator | 2026-04-13 00:57:54.637201 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-13 00:57:54.637205 | orchestrator | Monday 13 April 2026 00:52:41 +0000 (0:00:00.382) 0:06:12.718 ********** 2026-04-13 00:57:54.637210 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-13 00:57:54.637215 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-13 00:57:54.637219 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-04-13 00:57:54.637224 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-13 00:57:54.637229 | orchestrator | 2026-04-13 00:57:54.637233 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-13 00:57:54.637238 | orchestrator | Monday 13 April 2026 00:52:51 +0000 (0:00:09.655) 0:06:22.373 ********** 2026-04-13 00:57:54.637243 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.637248 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.637255 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.637262 | orchestrator | 2026-04-13 00:57:54.637270 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-13 00:57:54.637278 | orchestrator | Monday 13 April 2026 00:52:51 +0000 (0:00:00.657) 0:06:23.031 ********** 2026-04-13 00:57:54.637286 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-13 00:57:54.637294 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-13 00:57:54.637301 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-13 00:57:54.637305 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-13 00:57:54.637314 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:57:54.637319 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:57:54.637324 | orchestrator | 2026-04-13 00:57:54.637367 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-13 00:57:54.637374 | orchestrator | Monday 13 April 2026 00:52:53 +0000 (0:00:01.989) 0:06:25.021 ********** 2026-04-13 00:57:54.637379 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-13 00:57:54.637384 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-13 00:57:54.637389 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-13 
00:57:54.637393 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-13 00:57:54.637399 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-13 00:57:54.637408 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-13 00:57:54.637416 | orchestrator | 2026-04-13 00:57:54.637424 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-13 00:57:54.637433 | orchestrator | Monday 13 April 2026 00:52:55 +0000 (0:00:01.319) 0:06:26.341 ********** 2026-04-13 00:57:54.637454 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.637462 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.637469 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.637478 | orchestrator | 2026-04-13 00:57:54.637486 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-13 00:57:54.637500 | orchestrator | Monday 13 April 2026 00:52:55 +0000 (0:00:00.676) 0:06:27.017 ********** 2026-04-13 00:57:54.637509 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.637517 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.637525 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.637534 | orchestrator | 2026-04-13 00:57:54.637542 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-13 00:57:54.637551 | orchestrator | Monday 13 April 2026 00:52:56 +0000 (0:00:00.571) 0:06:27.588 ********** 2026-04-13 00:57:54.637559 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.637567 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.637575 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.637583 | orchestrator | 2026-04-13 00:57:54.637592 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-13 00:57:54.637600 | orchestrator | Monday 13 April 2026 00:52:56 +0000 (0:00:00.336) 0:06:27.925 
**********
2026-04-13 00:57:54.637608 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Monday 13 April 2026 00:52:57 +0000 (0:00:00.534) 0:06:28.459 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Monday 13 April 2026 00:52:57 +0000 (0:00:00.614) 0:06:29.074 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Monday 13 April 2026 00:52:58 +0000 (0:00:00.319) 0:06:29.394 **********
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Monday 13 April 2026 00:52:58 +0000 (0:00:00.520) 0:06:29.914 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Monday 13 April 2026 00:53:00 +0000 (0:00:01.521) 0:06:31.436 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Monday 13 April 2026 00:53:01 +0000 (0:00:01.178) 0:06:32.614 **********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Monday 13 April 2026 00:53:03 +0000 (0:00:01.676) 0:06:34.291 **********
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Monday 13 April 2026 00:53:04 +0000 (0:00:01.820) 0:06:36.112 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2

TASK [ceph-mgr : Wait for all mgr to be up] ************************************
Monday 13 April 2026 00:53:05 +0000 (0:00:00.721) 0:06:36.833 **********
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Monday 13 April 2026 00:53:18 +0000 (0:00:13.127) 0:06:49.960 **********
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Monday 13 April 2026 00:53:19 +0000 (0:00:01.293) 0:06:51.254 **********
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Monday 13 April 2026 00:53:20 +0000 (0:00:00.318) 0:06:51.572 **********
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Monday 13 April 2026 00:53:20 +0000 (0:00:00.191) 0:06:51.764 **********
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Monday 13 April 2026 00:53:26 +0000 (0:00:05.974) 0:06:57.739 **********
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 13 April 2026 00:53:31 +0000 (0:00:04.891) 0:07:02.630 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Monday 13 April 2026 00:53:32 +0000 (0:00:00.640) 0:07:03.271 **********
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Monday 13 April 2026 00:53:32 +0000 (0:00:00.539) 0:07:03.811 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Monday 13 April 2026 00:53:32 +0000 (0:00:00.349) 0:07:04.160 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Monday 13 April 2026 00:53:34 +0000 (0:00:01.554) 0:07:05.715 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Monday 13 April 2026 00:53:35 +0000 (0:00:00.650) 0:07:06.365 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 13 April 2026 00:53:35 +0000 (0:00:00.640) 0:07:07.006 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 13 April 2026 00:53:36 +0000 (0:00:00.772) 0:07:07.778 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-3, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Monday 13 April 2026 00:53:36 +0000 (0:00:00.460) 0:07:08.238 **********
skipping: [testbed-node-4]
skipping: [testbed-node-3]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 13 April 2026 00:53:37 +0000 (0:00:00.456) 0:07:08.695 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 13 April 2026 00:53:38 +0000 (0:00:00.677) 0:07:09.373 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 13 April 2026 00:53:38 +0000 (0:00:00.716) 0:07:10.089 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 13 April 2026 00:53:39 +0000 (0:00:00.627) 0:07:10.716 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 13 April 2026 00:53:39 +0000 (0:00:00.424) 0:07:11.140 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 13 April 2026 00:53:40 +0000 (0:00:00.276) 0:07:11.417 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 13 April 2026 00:53:40 +0000 (0:00:00.271) 0:07:11.689 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 13 April 2026 00:53:41 +0000 (0:00:00.728) 0:07:12.417 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 13 April 2026 00:53:42 +0000 (0:00:01.015) 0:07:13.432 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 13 April 2026 00:53:42 +0000 (0:00:00.331) 0:07:13.763 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 13 April 2026 00:53:42 +0000 (0:00:00.315) 0:07:14.079 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 13 April 2026 00:53:43 +0000 (0:00:00.334) 0:07:14.414 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 13 April 2026 00:53:43 +0000 (0:00:00.348) 0:07:14.762 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 13 April 2026 00:53:44 +0000 (0:00:00.634) 0:07:15.397 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 13 April 2026 00:53:44 +0000 (0:00:00.312) 0:07:15.710 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 13 April 2026 00:53:44 +0000 (0:00:00.318) 0:07:16.028 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 13 April 2026 00:53:45 +0000 (0:00:00.325) 0:07:16.353 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 13 April 2026 00:53:45 +0000 (0:00:00.648) 0:07:17.002 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Monday 13 April 2026 00:53:46 +0000 (0:00:00.504) 0:07:17.506 **********
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Monday 13 April 2026 00:53:46 +0000 (0:00:00.307) 0:07:17.813 **********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Monday 13 April 2026 00:53:47 +0000 (0:00:01.223) 0:07:19.037 **********
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Monday 13 April 2026 00:53:48 +0000 (0:00:00.578) 0:07:19.615 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Monday 13 April 2026 00:53:48 +0000 (0:00:00.297) 0:07:19.913 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Monday 13 April 2026 00:53:49 +0000 (0:00:00.619) 0:07:20.533 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Monday 13 April 2026 00:53:49 +0000 (0:00:00.648) 0:07:21.182 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Monday 13 April 2026 00:53:50 +0000 (0:00:00.381) 0:07:21.563 **********
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : Install dependencies] *****************************************
Monday 13 April 2026 00:53:54 +0000 (0:00:04.370) 0:07:25.933 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks common.yml] *************************************
Monday 13 April 2026 00:53:55 +0000 (0:00:00.593) 0:07:26.527 **********
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
Monday 13 April 2026 00:53:55 +0000 (0:00:00.538) 0:07:27.065 **********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)

TASK [ceph-osd : Get keys from monitors] ***************************************
Monday 13 April 2026 00:53:56 +0000 (0:00:01.043) 0:07:28.109 **********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
Monday 13 April 2026 00:53:58 +0000 (0:00:02.079) 0:07:30.188 **********
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]
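The "Apply operating system tuning" task above loops over a list of kernel parameter pairs and applies each one on the OSD nodes (ceph-ansible uses Ansible's sysctl module for this, which also persists the values). A minimal dry-run sketch of the same loop; the values are the ones shown in the log, while the `render_sysctl` helper and the echo-only form are illustrative, not ceph-ansible code:

```shell
#!/bin/sh
# Dry-run sketch: print the sysctl(8) invocation for each tuning pair from
# the "Apply operating system tuning" task. Values match the log above;
# render_sysctl is a hypothetical helper, not part of ceph-ansible.
render_sysctl() {
    # $1 = kernel parameter name, $2 = value
    printf 'sysctl -w %s=%s\n' "$1" "$2"
}

render_sysctl fs.aio-max-nr 1048576
render_sysctl fs.file-max 26234859
render_sysctl vm.zone_reclaim_mode 0
render_sysctl vm.swappiness 10
render_sysctl vm.min_free_kbytes 67584
```

On a real node these would additionally be written under /etc/sysctl.d so they survive a reboot; the dry run only shows the commands.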
TASK [ceph-osd : Set noup flag] ************************************************
Monday 13 April 2026 00:54:00 +0000 (0:00:01.503) 0:07:31.692 **********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
Monday 13 April 2026 00:54:02 +0000 (0:00:01.772) 0:07:33.465 **********
included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Use ceph-volume to create osds] *******************************
Monday 13 April 2026 00:54:02 +0000 (0:00:00.560) 0:07:34.025 **********
changed: [testbed-node-3] => (item={'data': 'osd-block-273f60d0-eab1-5837-bb33-0c04c9e5b829', 'data_vg': 'ceph-273f60d0-eab1-5837-bb33-0c04c9e5b829'})
changed: [testbed-node-4] => (item={'data': 'osd-block-976187fe-8802-504d-92cd-339995e22605', 'data_vg': 'ceph-976187fe-8802-504d-92cd-339995e22605'})
changed: [testbed-node-5] => (item={'data': 'osd-block-ae95053f-cfae-50f3-8301-23c2132e6da4', 'data_vg': 'ceph-ae95053f-cfae-50f3-8301-23c2132e6da4'})
changed: [testbed-node-3] => (item={'data': 'osd-block-f99b2314-ad51-5797-a71e-17207c9800e6', 'data_vg': 'ceph-f99b2314-ad51-5797-a71e-17207c9800e6'})
changed: [testbed-node-4] => (item={'data': 'osd-block-204a2e69-8032-57e4-80e8-bdb37f98e657', 'data_vg': 'ceph-204a2e69-8032-57e4-80e8-bdb37f98e657'})
changed: [testbed-node-5] => (item={'data': 'osd-block-42f39a41-1a89-55d6-ba76-16e64e7a2b2d', 'data_vg': 'ceph-42f39a41-1a89-55d6-ba76-16e64e7a2b2d'})

TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
Monday 13 April 2026 00:54:37 +0000 (0:00:34.592) 0:08:08.618 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
Monday 13 April 2026 00:54:37 +0000 (0:00:00.635) 0:08:09.253 **********
included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Get osd ids] **************************************************
Monday 13 April 2026 00:54:38 +0000 (0:00:00.534) 0:08:09.787 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Collect osd ids] **********************************************
Monday 13 April 2026 00:54:39 +0000 (0:00:00.634) 0:08:10.422 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Include_tasks systemd.yml] ************************************
Monday 13 April 2026 00:54:41 +0000 (0:00:01.949) 0:08:12.372 **********
included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Generate systemd unit file] ***********************************
Monday 13 April 2026 00:54:41 +0000 (0:00:00.614) 0:08:12.987 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
Monday 13 April 2026 00:54:42 +0000 (0:00:01.253) 0:08:14.241 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Enable ceph-osd.target] ***************************************
Monday 13 April 2026 00:54:44 +0000 (0:00:01.570) 0:08:15.811 **********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [ceph-osd : Ensure systemd service override directory exists] *************
Monday 13 April 2026 00:54:46 +0000 (0:00:01.853) 0:08:17.664 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
Monday 13 April 2026 00:54:46 +0000 (0:00:00.361) 0:08:18.026 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
Monday 13 April 2026 00:54:47 +0000 (0:00:00.383) 0:08:18.409 **********
ok: [testbed-node-3] => (item=4)
ok: [testbed-node-3] => (item=2)
ok: [testbed-node-4] => (item=3)
ok: [testbed-node-5] => (item=1)
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-5] => (item=5)

TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
orchestrator | Monday 13 April 2026 00:54:48 +0000 (0:00:01.437) 0:08:19.846 ********** 2026-04-13 00:57:54.640320 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-13 00:57:54.640326 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-13 00:57:54.640333 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-04-13 00:57:54.640339 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-04-13 00:57:54.640357 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-13 00:57:54.640364 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-13 00:57:54.640370 | orchestrator | 2026-04-13 00:57:54.640376 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-13 00:57:54.640383 | orchestrator | Monday 13 April 2026 00:54:50 +0000 (0:00:02.253) 0:08:22.100 ********** 2026-04-13 00:57:54.640394 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-13 00:57:54.640401 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-13 00:57:54.640408 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-04-13 00:57:54.640415 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-13 00:57:54.640429 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-13 00:57:54.640436 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-04-13 00:57:54.640448 | orchestrator | 2026-04-13 00:57:54.640455 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-13 00:57:54.640471 | orchestrator | Monday 13 April 2026 00:54:54 +0000 (0:00:03.651) 0:08:25.751 ********** 2026-04-13 00:57:54.640478 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640484 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.640490 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-13 00:57:54.640497 | orchestrator | 2026-04-13 00:57:54.640503 | orchestrator | TASK [ceph-osd : Wait 
for all osd to be up] ************************************ 2026-04-13 00:57:54.640511 | orchestrator | Monday 13 April 2026 00:54:57 +0000 (0:00:02.582) 0:08:28.334 ********** 2026-04-13 00:57:54.640517 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640524 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.640531 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-04-13 00:57:54.640537 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-13 00:57:54.640544 | orchestrator | 2026-04-13 00:57:54.640551 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-13 00:57:54.640558 | orchestrator | Monday 13 April 2026 00:55:09 +0000 (0:00:12.832) 0:08:41.166 ********** 2026-04-13 00:57:54.640565 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640572 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.640579 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.640586 | orchestrator | 2026-04-13 00:57:54.640592 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-13 00:57:54.640597 | orchestrator | Monday 13 April 2026 00:55:10 +0000 (0:00:00.931) 0:08:42.098 ********** 2026-04-13 00:57:54.640601 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640605 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.640610 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.640614 | orchestrator | 2026-04-13 00:57:54.640618 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-13 00:57:54.640622 | orchestrator | Monday 13 April 2026 00:55:11 +0000 (0:00:00.708) 0:08:42.807 ********** 2026-04-13 00:57:54.640627 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 
2026-04-13 00:57:54.640631 | orchestrator | 2026-04-13 00:57:54.640635 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-13 00:57:54.640639 | orchestrator | Monday 13 April 2026 00:55:12 +0000 (0:00:00.578) 0:08:43.385 ********** 2026-04-13 00:57:54.640643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-13 00:57:54.640648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-13 00:57:54.640652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-13 00:57:54.640656 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640660 | orchestrator | 2026-04-13 00:57:54.640664 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-13 00:57:54.640668 | orchestrator | Monday 13 April 2026 00:55:12 +0000 (0:00:00.423) 0:08:43.809 ********** 2026-04-13 00:57:54.640672 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640676 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.640681 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.640685 | orchestrator | 2026-04-13 00:57:54.640689 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-13 00:57:54.640693 | orchestrator | Monday 13 April 2026 00:55:13 +0000 (0:00:00.671) 0:08:44.480 ********** 2026-04-13 00:57:54.640697 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640701 | orchestrator | 2026-04-13 00:57:54.640705 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-13 00:57:54.640710 | orchestrator | Monday 13 April 2026 00:55:13 +0000 (0:00:00.241) 0:08:44.721 ********** 2026-04-13 00:57:54.640714 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640722 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.640726 | orchestrator | skipping: 
[testbed-node-5] 2026-04-13 00:57:54.640730 | orchestrator | 2026-04-13 00:57:54.640735 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-13 00:57:54.640739 | orchestrator | Monday 13 April 2026 00:55:13 +0000 (0:00:00.324) 0:08:45.046 ********** 2026-04-13 00:57:54.640743 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640747 | orchestrator | 2026-04-13 00:57:54.640752 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-13 00:57:54.640756 | orchestrator | Monday 13 April 2026 00:55:14 +0000 (0:00:00.237) 0:08:45.284 ********** 2026-04-13 00:57:54.640760 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640764 | orchestrator | 2026-04-13 00:57:54.640768 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-13 00:57:54.640772 | orchestrator | Monday 13 April 2026 00:55:14 +0000 (0:00:00.232) 0:08:45.517 ********** 2026-04-13 00:57:54.640777 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640781 | orchestrator | 2026-04-13 00:57:54.640785 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-13 00:57:54.640791 | orchestrator | Monday 13 April 2026 00:55:14 +0000 (0:00:00.135) 0:08:45.652 ********** 2026-04-13 00:57:54.640797 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640804 | orchestrator | 2026-04-13 00:57:54.640812 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-13 00:57:54.640819 | orchestrator | Monday 13 April 2026 00:55:14 +0000 (0:00:00.238) 0:08:45.891 ********** 2026-04-13 00:57:54.640826 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640833 | orchestrator | 2026-04-13 00:57:54.640841 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-13 00:57:54.640852 | 
orchestrator | Monday 13 April 2026 00:55:14 +0000 (0:00:00.229) 0:08:46.121 ********** 2026-04-13 00:57:54.640857 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-13 00:57:54.640861 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-13 00:57:54.640870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-13 00:57:54.640874 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640878 | orchestrator | 2026-04-13 00:57:54.640883 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-13 00:57:54.640887 | orchestrator | Monday 13 April 2026 00:55:15 +0000 (0:00:00.763) 0:08:46.884 ********** 2026-04-13 00:57:54.640891 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640895 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.640899 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.640904 | orchestrator | 2026-04-13 00:57:54.640908 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-13 00:57:54.640912 | orchestrator | Monday 13 April 2026 00:55:16 +0000 (0:00:00.625) 0:08:47.510 ********** 2026-04-13 00:57:54.640916 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640920 | orchestrator | 2026-04-13 00:57:54.640924 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-13 00:57:54.640929 | orchestrator | Monday 13 April 2026 00:55:16 +0000 (0:00:00.287) 0:08:47.797 ********** 2026-04-13 00:57:54.640935 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.640942 | orchestrator | 2026-04-13 00:57:54.640948 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-04-13 00:57:54.640955 | orchestrator | 2026-04-13 00:57:54.640961 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-04-13 00:57:54.640968 | orchestrator | Monday 13 April 2026 00:55:17 +0000 (0:00:00.663) 0:08:48.461 ********** 2026-04-13 00:57:54.640975 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.640984 | orchestrator | 2026-04-13 00:57:54.640991 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-13 00:57:54.641004 | orchestrator | Monday 13 April 2026 00:55:18 +0000 (0:00:01.315) 0:08:49.776 ********** 2026-04-13 00:57:54.641009 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.641013 | orchestrator | 2026-04-13 00:57:54.641018 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-13 00:57:54.641022 | orchestrator | Monday 13 April 2026 00:55:19 +0000 (0:00:01.344) 0:08:51.120 ********** 2026-04-13 00:57:54.641026 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.641030 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.641035 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.641039 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.641043 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.641047 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.641051 | orchestrator | 2026-04-13 00:57:54.641055 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-13 00:57:54.641060 | orchestrator | Monday 13 April 2026 00:55:20 +0000 (0:00:01.066) 0:08:52.187 ********** 2026-04-13 00:57:54.641064 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.641068 | orchestrator | skipping: [testbed-node-1] 2026-04-13 
00:57:54.641072 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.641076 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.641080 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.641085 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.641089 | orchestrator | 2026-04-13 00:57:54.641093 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-13 00:57:54.641097 | orchestrator | Monday 13 April 2026 00:55:22 +0000 (0:00:01.075) 0:08:53.262 ********** 2026-04-13 00:57:54.641101 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.641106 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.641110 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.641114 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.641118 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.641122 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.641126 | orchestrator | 2026-04-13 00:57:54.641130 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-13 00:57:54.641135 | orchestrator | Monday 13 April 2026 00:55:23 +0000 (0:00:01.419) 0:08:54.682 ********** 2026-04-13 00:57:54.641139 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.641143 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.641147 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.641151 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.641155 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.641160 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.641164 | orchestrator | 2026-04-13 00:57:54.641168 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-13 00:57:54.641172 | orchestrator | Monday 13 April 2026 00:55:24 +0000 (0:00:01.008) 0:08:55.690 ********** 2026-04-13 00:57:54.641176 | orchestrator | skipping: [testbed-node-3] 
2026-04-13 00:57:54.641181 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.641185 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.641189 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.641193 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.641197 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.641201 | orchestrator | 2026-04-13 00:57:54.641205 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-13 00:57:54.641210 | orchestrator | Monday 13 April 2026 00:55:25 +0000 (0:00:01.117) 0:08:56.808 ********** 2026-04-13 00:57:54.641214 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.641218 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.641222 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.641226 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.641230 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.641237 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.641241 | orchestrator | 2026-04-13 00:57:54.641245 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-13 00:57:54.641252 | orchestrator | Monday 13 April 2026 00:55:26 +0000 (0:00:00.636) 0:08:57.444 ********** 2026-04-13 00:57:54.641256 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.641261 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.641265 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.641272 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.641276 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.641280 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.641284 | orchestrator | 2026-04-13 00:57:54.641289 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-13 00:57:54.641293 | orchestrator | Monday 13 April 2026 
00:55:27 +0000 (0:00:00.990) 0:08:58.434 ********** 2026-04-13 00:57:54.641297 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.641301 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.641305 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.641309 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.641314 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.641318 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.641322 | orchestrator | 2026-04-13 00:57:54.641326 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-13 00:57:54.641330 | orchestrator | Monday 13 April 2026 00:55:28 +0000 (0:00:01.083) 0:08:59.518 ********** 2026-04-13 00:57:54.641334 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.641338 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.641373 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.641378 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.641382 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.641387 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.641391 | orchestrator | 2026-04-13 00:57:54.641395 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-13 00:57:54.641399 | orchestrator | Monday 13 April 2026 00:55:29 +0000 (0:00:01.356) 0:09:00.875 ********** 2026-04-13 00:57:54.641404 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.641408 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.641412 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.641416 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.641420 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.641425 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.641429 | orchestrator | 2026-04-13 00:57:54.641433 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2026-04-13 00:57:54.641437 | orchestrator | Monday 13 April 2026 00:55:30 +0000 (0:00:00.908) 0:09:01.783 ********** 2026-04-13 00:57:54.641442 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.641446 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.641450 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.641454 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.641458 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.641462 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.641467 | orchestrator | 2026-04-13 00:57:54.641471 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-13 00:57:54.641475 | orchestrator | Monday 13 April 2026 00:55:31 +0000 (0:00:00.761) 0:09:02.545 ********** 2026-04-13 00:57:54.641479 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.641483 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.641488 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.641492 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.641496 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.641500 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.641504 | orchestrator | 2026-04-13 00:57:54.641509 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-13 00:57:54.641513 | orchestrator | Monday 13 April 2026 00:55:32 +0000 (0:00:00.972) 0:09:03.518 ********** 2026-04-13 00:57:54.641521 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.641525 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.641529 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.641533 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.641538 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.641542 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.641546 | orchestrator | 2026-04-13 00:57:54.641550 | orchestrator 
| TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-13 00:57:54.641555 | orchestrator | Monday 13 April 2026 00:55:32 +0000 (0:00:00.609) 0:09:04.127 ********** 2026-04-13 00:57:54.641559 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.641563 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.641567 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.641571 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.641576 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.641580 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.641584 | orchestrator | 2026-04-13 00:57:54.641588 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-13 00:57:54.641593 | orchestrator | Monday 13 April 2026 00:55:33 +0000 (0:00:00.999) 0:09:05.127 ********** 2026-04-13 00:57:54.641597 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.641601 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.641605 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.641609 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.641614 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.641618 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.641622 | orchestrator | 2026-04-13 00:57:54.641626 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-13 00:57:54.641630 | orchestrator | Monday 13 April 2026 00:55:34 +0000 (0:00:00.588) 0:09:05.715 ********** 2026-04-13 00:57:54.641635 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:57:54.641639 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:57:54.641643 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:57:54.641647 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.641651 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.641655 | 
orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.641659 | orchestrator | 2026-04-13 00:57:54.641664 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-13 00:57:54.641668 | orchestrator | Monday 13 April 2026 00:55:35 +0000 (0:00:00.957) 0:09:06.672 ********** 2026-04-13 00:57:54.641672 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.641676 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.641680 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.641685 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.641689 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.641693 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.641697 | orchestrator | 2026-04-13 00:57:54.641704 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-13 00:57:54.641708 | orchestrator | Monday 13 April 2026 00:55:36 +0000 (0:00:00.641) 0:09:07.314 ********** 2026-04-13 00:57:54.641712 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.641716 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.641724 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.641728 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.641733 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.641737 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.641741 | orchestrator | 2026-04-13 00:57:54.641745 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-13 00:57:54.641749 | orchestrator | Monday 13 April 2026 00:55:37 +0000 (0:00:01.020) 0:09:08.334 ********** 2026-04-13 00:57:54.641753 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.641758 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.641762 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.641766 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.641774 | orchestrator 
| ok: [testbed-node-4] 2026-04-13 00:57:54.641781 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.641788 | orchestrator | 2026-04-13 00:57:54.641794 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-13 00:57:54.641801 | orchestrator | Monday 13 April 2026 00:55:38 +0000 (0:00:01.369) 0:09:09.703 ********** 2026-04-13 00:57:54.641808 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:57:54.641815 | orchestrator | 2026-04-13 00:57:54.641821 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-04-13 00:57:54.641828 | orchestrator | Monday 13 April 2026 00:55:42 +0000 (0:00:04.292) 0:09:13.996 ********** 2026-04-13 00:57:54.641835 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.641842 | orchestrator | 2026-04-13 00:57:54.641849 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-04-13 00:57:54.641856 | orchestrator | Monday 13 April 2026 00:55:44 +0000 (0:00:01.686) 0:09:15.682 ********** 2026-04-13 00:57:54.641863 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.641871 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:57:54.641878 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:57:54.641884 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.641891 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.641898 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.641905 | orchestrator | 2026-04-13 00:57:54.641912 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-04-13 00:57:54.641918 | orchestrator | Monday 13 April 2026 00:55:46 +0000 (0:00:01.786) 0:09:17.469 ********** 2026-04-13 00:57:54.641924 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:57:54.641928 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:57:54.641932 | orchestrator | changed: [testbed-node-2] 
2026-04-13 00:57:54.641936 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.641943 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.641950 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.641957 | orchestrator | 2026-04-13 00:57:54.641963 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-04-13 00:57:54.641970 | orchestrator | Monday 13 April 2026 00:55:47 +0000 (0:00:01.030) 0:09:18.499 ********** 2026-04-13 00:57:54.641977 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.641984 | orchestrator | 2026-04-13 00:57:54.641992 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-13 00:57:54.641996 | orchestrator | Monday 13 April 2026 00:55:48 +0000 (0:00:01.340) 0:09:19.840 ********** 2026-04-13 00:57:54.642000 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:57:54.642004 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:57:54.642008 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:57:54.642012 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.642040 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.642044 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.642048 | orchestrator | 2026-04-13 00:57:54.642052 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-13 00:57:54.642055 | orchestrator | Monday 13 April 2026 00:55:50 +0000 (0:00:01.768) 0:09:21.609 ********** 2026-04-13 00:57:54.642059 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:57:54.642063 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:57:54.642067 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:57:54.642071 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.642075 | 
orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.642079 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.642083 | orchestrator | 2026-04-13 00:57:54.642086 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-04-13 00:57:54.642090 | orchestrator | Monday 13 April 2026 00:55:54 +0000 (0:00:03.843) 0:09:25.452 ********** 2026-04-13 00:57:54.642094 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.642103 | orchestrator | 2026-04-13 00:57:54.642107 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-04-13 00:57:54.642111 | orchestrator | Monday 13 April 2026 00:55:55 +0000 (0:00:01.435) 0:09:26.887 ********** 2026-04-13 00:57:54.642115 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.642118 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.642122 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.642126 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.642130 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.642134 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.642137 | orchestrator | 2026-04-13 00:57:54.642141 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-04-13 00:57:54.642145 | orchestrator | Monday 13 April 2026 00:55:56 +0000 (0:00:00.671) 0:09:27.559 ********** 2026-04-13 00:57:54.642149 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:57:54.642153 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.642157 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.642161 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:57:54.642164 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.642168 | orchestrator | changed: [testbed-node-1] 2026-04-13 
00:57:54.642172 | orchestrator | 2026-04-13 00:57:54.642176 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-04-13 00:57:54.642182 | orchestrator | Monday 13 April 2026 00:55:58 +0000 (0:00:02.687) 0:09:30.247 ********** 2026-04-13 00:57:54.642186 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:54.642190 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:54.642194 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:54.642198 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.642205 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.642209 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.642213 | orchestrator | 2026-04-13 00:57:54.642217 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-04-13 00:57:54.642220 | orchestrator | 2026-04-13 00:57:54.642224 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-13 00:57:54.642228 | orchestrator | Monday 13 April 2026 00:56:00 +0000 (0:00:01.167) 0:09:31.415 ********** 2026-04-13 00:57:54.642232 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.642236 | orchestrator | 2026-04-13 00:57:54.642240 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-13 00:57:54.642244 | orchestrator | Monday 13 April 2026 00:56:00 +0000 (0:00:00.526) 0:09:31.941 ********** 2026-04-13 00:57:54.642248 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.642252 | orchestrator | 2026-04-13 00:57:54.642256 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-13 00:57:54.642260 | orchestrator | Monday 13 April 2026 00:56:01 +0000 
(0:00:00.890) 0:09:32.832 ********** 2026-04-13 00:57:54.642263 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.642267 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.642271 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.642275 | orchestrator | 2026-04-13 00:57:54.642279 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-13 00:57:54.642283 | orchestrator | Monday 13 April 2026 00:56:01 +0000 (0:00:00.388) 0:09:33.220 ********** 2026-04-13 00:57:54.642287 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.642291 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.642295 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.642298 | orchestrator | 2026-04-13 00:57:54.642302 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-13 00:57:54.642306 | orchestrator | Monday 13 April 2026 00:56:02 +0000 (0:00:00.718) 0:09:33.939 ********** 2026-04-13 00:57:54.642313 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.642317 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.642321 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.642325 | orchestrator | 2026-04-13 00:57:54.642328 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-13 00:57:54.642332 | orchestrator | Monday 13 April 2026 00:56:03 +0000 (0:00:00.843) 0:09:34.782 ********** 2026-04-13 00:57:54.642336 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.642340 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.642354 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.642358 | orchestrator | 2026-04-13 00:57:54.642362 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-13 00:57:54.642366 | orchestrator | Monday 13 April 2026 00:56:04 +0000 (0:00:01.107) 0:09:35.890 ********** 2026-04-13 
00:57:54.642370 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.642374 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.642378 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.642381 | orchestrator | 2026-04-13 00:57:54.642385 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-13 00:57:54.642389 | orchestrator | Monday 13 April 2026 00:56:04 +0000 (0:00:00.349) 0:09:36.239 ********** 2026-04-13 00:57:54.642393 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.642397 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.642401 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.642404 | orchestrator | 2026-04-13 00:57:54.642408 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-13 00:57:54.642412 | orchestrator | Monday 13 April 2026 00:56:05 +0000 (0:00:00.310) 0:09:36.550 ********** 2026-04-13 00:57:54.642416 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.642420 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.642424 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.642428 | orchestrator | 2026-04-13 00:57:54.642431 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-13 00:57:54.642435 | orchestrator | Monday 13 April 2026 00:56:05 +0000 (0:00:00.309) 0:09:36.860 ********** 2026-04-13 00:57:54.642439 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.642443 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.642447 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.642451 | orchestrator | 2026-04-13 00:57:54.642455 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-13 00:57:54.642458 | orchestrator | Monday 13 April 2026 00:56:06 +0000 (0:00:01.156) 0:09:38.016 ********** 2026-04-13 00:57:54.642462 | 
orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.642466 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.642470 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.642474 | orchestrator | 2026-04-13 00:57:54.642478 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-13 00:57:54.642482 | orchestrator | Monday 13 April 2026 00:56:07 +0000 (0:00:00.786) 0:09:38.803 ********** 2026-04-13 00:57:54.642485 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.642489 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.642493 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.642497 | orchestrator | 2026-04-13 00:57:54.642501 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-13 00:57:54.642505 | orchestrator | Monday 13 April 2026 00:56:07 +0000 (0:00:00.322) 0:09:39.126 ********** 2026-04-13 00:57:54.642508 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.642512 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.642516 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.642520 | orchestrator | 2026-04-13 00:57:54.642524 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-13 00:57:54.642528 | orchestrator | Monday 13 April 2026 00:56:08 +0000 (0:00:00.334) 0:09:39.460 ********** 2026-04-13 00:57:54.642531 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.642535 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.642544 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.642548 | orchestrator | 2026-04-13 00:57:54.642552 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-13 00:57:54.642559 | orchestrator | Monday 13 April 2026 00:56:08 +0000 (0:00:00.682) 0:09:40.143 ********** 2026-04-13 00:57:54.642563 | orchestrator | ok: [testbed-node-3] 
2026-04-13 00:57:54.642566 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.642570 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.642574 | orchestrator | 2026-04-13 00:57:54.642578 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-13 00:57:54.642582 | orchestrator | Monday 13 April 2026 00:56:09 +0000 (0:00:00.421) 0:09:40.564 ********** 2026-04-13 00:57:54.642586 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.642589 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.642593 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.642597 | orchestrator | 2026-04-13 00:57:54.642601 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-13 00:57:54.642605 | orchestrator | Monday 13 April 2026 00:56:09 +0000 (0:00:00.430) 0:09:40.995 ********** 2026-04-13 00:57:54.642609 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.642613 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.642616 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.642620 | orchestrator | 2026-04-13 00:57:54.642624 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-13 00:57:54.642628 | orchestrator | Monday 13 April 2026 00:56:10 +0000 (0:00:00.379) 0:09:41.374 ********** 2026-04-13 00:57:54.642632 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.642636 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.642639 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.642643 | orchestrator | 2026-04-13 00:57:54.642647 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-13 00:57:54.642651 | orchestrator | Monday 13 April 2026 00:56:10 +0000 (0:00:00.714) 0:09:42.088 ********** 2026-04-13 00:57:54.642655 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.642659 | 
orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.642662 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.642666 | orchestrator | 2026-04-13 00:57:54.642670 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-13 00:57:54.642674 | orchestrator | Monday 13 April 2026 00:56:11 +0000 (0:00:00.389) 0:09:42.478 ********** 2026-04-13 00:57:54.642678 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.642682 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.642686 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.642689 | orchestrator | 2026-04-13 00:57:54.642693 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-13 00:57:54.642697 | orchestrator | Monday 13 April 2026 00:56:11 +0000 (0:00:00.416) 0:09:42.894 ********** 2026-04-13 00:57:54.642701 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.642705 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.642708 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.642712 | orchestrator | 2026-04-13 00:57:54.642716 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-13 00:57:54.642720 | orchestrator | Monday 13 April 2026 00:56:12 +0000 (0:00:01.009) 0:09:43.903 ********** 2026-04-13 00:57:54.642724 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.642728 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.642732 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-04-13 00:57:54.642736 | orchestrator | 2026-04-13 00:57:54.642739 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-04-13 00:57:54.642743 | orchestrator | Monday 13 April 2026 00:56:13 +0000 (0:00:00.503) 0:09:44.407 ********** 2026-04-13 00:57:54.642747 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-04-13 00:57:54.642751 | orchestrator | 2026-04-13 00:57:54.642755 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-04-13 00:57:54.642761 | orchestrator | Monday 13 April 2026 00:56:14 +0000 (0:00:01.826) 0:09:46.234 ********** 2026-04-13 00:57:54.642766 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-04-13 00:57:54.642771 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.642775 | orchestrator | 2026-04-13 00:57:54.642779 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-04-13 00:57:54.642782 | orchestrator | Monday 13 April 2026 00:56:15 +0000 (0:00:00.256) 0:09:46.490 ********** 2026-04-13 00:57:54.642788 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-13 00:57:54.642795 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-13 00:57:54.642799 | orchestrator | 2026-04-13 00:57:54.642803 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-04-13 00:57:54.642806 | orchestrator | Monday 13 April 2026 00:56:21 +0000 (0:00:06.152) 0:09:52.643 ********** 2026-04-13 00:57:54.642810 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-13 00:57:54.642814 | 
orchestrator | 2026-04-13 00:57:54.642818 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-13 00:57:54.642822 | orchestrator | Monday 13 April 2026 00:56:24 +0000 (0:00:02.764) 0:09:55.408 ********** 2026-04-13 00:57:54.642827 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.642831 | orchestrator | 2026-04-13 00:57:54.642835 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-13 00:57:54.642841 | orchestrator | Monday 13 April 2026 00:56:25 +0000 (0:00:00.950) 0:09:56.359 ********** 2026-04-13 00:57:54.642845 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-13 00:57:54.642849 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-13 00:57:54.642853 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-13 00:57:54.642857 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-04-13 00:57:54.642861 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-04-13 00:57:54.642864 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-04-13 00:57:54.642868 | orchestrator | 2026-04-13 00:57:54.642872 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-13 00:57:54.642876 | orchestrator | Monday 13 April 2026 00:56:26 +0000 (0:00:01.083) 0:09:57.442 ********** 2026-04-13 00:57:54.642880 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:57:54.642884 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-13 00:57:54.642888 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-13 00:57:54.642891 | orchestrator | 2026-04-13 
00:57:54.642895 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-13 00:57:54.642899 | orchestrator | Monday 13 April 2026 00:56:27 +0000 (0:00:01.759) 0:09:59.201 ********** 2026-04-13 00:57:54.642903 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-13 00:57:54.642907 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-13 00:57:54.642911 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.642915 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-13 00:57:54.642922 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-13 00:57:54.642926 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.642929 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-13 00:57:54.642933 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-13 00:57:54.642937 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.642941 | orchestrator | 2026-04-13 00:57:54.642945 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-13 00:57:54.642949 | orchestrator | Monday 13 April 2026 00:56:29 +0000 (0:00:01.230) 0:10:00.432 ********** 2026-04-13 00:57:54.642953 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.642956 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.642960 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.642964 | orchestrator | 2026-04-13 00:57:54.642968 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-13 00:57:54.642972 | orchestrator | Monday 13 April 2026 00:56:32 +0000 (0:00:03.173) 0:10:03.605 ********** 2026-04-13 00:57:54.642976 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.642979 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.642983 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.642987 | orchestrator | 
2026-04-13 00:57:54.642991 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-13 00:57:54.642995 | orchestrator | Monday 13 April 2026 00:56:32 +0000 (0:00:00.388) 0:10:03.994 ********** 2026-04-13 00:57:54.642999 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.643002 | orchestrator | 2026-04-13 00:57:54.643006 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-13 00:57:54.643010 | orchestrator | Monday 13 April 2026 00:56:33 +0000 (0:00:00.536) 0:10:04.530 ********** 2026-04-13 00:57:54.643014 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.643018 | orchestrator | 2026-04-13 00:57:54.643022 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-13 00:57:54.643025 | orchestrator | Monday 13 April 2026 00:56:34 +0000 (0:00:00.962) 0:10:05.493 ********** 2026-04-13 00:57:54.643029 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.643033 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.643037 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.643041 | orchestrator | 2026-04-13 00:57:54.643045 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-13 00:57:54.643049 | orchestrator | Monday 13 April 2026 00:56:35 +0000 (0:00:01.552) 0:10:07.046 ********** 2026-04-13 00:57:54.643052 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.643056 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.643060 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.643064 | orchestrator | 2026-04-13 00:57:54.643068 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-13 
00:57:54.643072 | orchestrator | Monday 13 April 2026 00:56:36 +0000 (0:00:01.116) 0:10:08.162 ********** 2026-04-13 00:57:54.643076 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.643079 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.643083 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.643087 | orchestrator | 2026-04-13 00:57:54.643091 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-13 00:57:54.643095 | orchestrator | Monday 13 April 2026 00:56:39 +0000 (0:00:02.178) 0:10:10.340 ********** 2026-04-13 00:57:54.643098 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.643102 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.643106 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.643110 | orchestrator | 2026-04-13 00:57:54.643120 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-13 00:57:54.643124 | orchestrator | Monday 13 April 2026 00:56:41 +0000 (0:00:02.060) 0:10:12.400 ********** 2026-04-13 00:57:54.643132 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.643138 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.643145 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.643152 | orchestrator | 2026-04-13 00:57:54.643164 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-13 00:57:54.643172 | orchestrator | Monday 13 April 2026 00:56:42 +0000 (0:00:01.668) 0:10:14.069 ********** 2026-04-13 00:57:54.643183 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.643190 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.643197 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.643204 | orchestrator | 2026-04-13 00:57:54.643211 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-13 00:57:54.643218 | orchestrator 
| Monday 13 April 2026 00:56:43 +0000 (0:00:00.964) 0:10:15.033 ********** 2026-04-13 00:57:54.643224 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-5, testbed-node-4 2026-04-13 00:57:54.643228 | orchestrator | 2026-04-13 00:57:54.643232 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-13 00:57:54.643236 | orchestrator | Monday 13 April 2026 00:56:44 +0000 (0:00:00.763) 0:10:15.796 ********** 2026-04-13 00:57:54.643239 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.643243 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.643247 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.643251 | orchestrator | 2026-04-13 00:57:54.643255 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-13 00:57:54.643258 | orchestrator | Monday 13 April 2026 00:56:45 +0000 (0:00:00.693) 0:10:16.491 ********** 2026-04-13 00:57:54.643262 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.643266 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.643270 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.643274 | orchestrator | 2026-04-13 00:57:54.643277 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-13 00:57:54.643281 | orchestrator | Monday 13 April 2026 00:56:46 +0000 (0:00:01.240) 0:10:17.731 ********** 2026-04-13 00:57:54.643285 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-13 00:57:54.643289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-13 00:57:54.643293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-13 00:57:54.643297 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.643300 | orchestrator | 2026-04-13 00:57:54.643304 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called 
after restart] ********* 2026-04-13 00:57:54.643308 | orchestrator | Monday 13 April 2026 00:56:47 +0000 (0:00:00.626) 0:10:18.357 ********** 2026-04-13 00:57:54.643312 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.643316 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.643319 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.643323 | orchestrator | 2026-04-13 00:57:54.643327 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-13 00:57:54.643331 | orchestrator | 2026-04-13 00:57:54.643335 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-13 00:57:54.643339 | orchestrator | Monday 13 April 2026 00:56:47 +0000 (0:00:00.721) 0:10:19.079 ********** 2026-04-13 00:57:54.643351 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.643357 | orchestrator | 2026-04-13 00:57:54.643361 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-13 00:57:54.643365 | orchestrator | Monday 13 April 2026 00:56:49 +0000 (0:00:01.383) 0:10:20.462 ********** 2026-04-13 00:57:54.643369 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.643372 | orchestrator | 2026-04-13 00:57:54.643376 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-13 00:57:54.643384 | orchestrator | Monday 13 April 2026 00:56:49 +0000 (0:00:00.789) 0:10:21.252 ********** 2026-04-13 00:57:54.643388 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.643391 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.643395 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.643399 | orchestrator | 2026-04-13 00:57:54.643403 | orchestrator | 
TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-13 00:57:54.643407 | orchestrator | Monday 13 April 2026 00:56:50 +0000 (0:00:00.880) 0:10:22.133 ********** 2026-04-13 00:57:54.643410 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.643414 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.643418 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.643422 | orchestrator | 2026-04-13 00:57:54.643426 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-13 00:57:54.643430 | orchestrator | Monday 13 April 2026 00:56:51 +0000 (0:00:00.831) 0:10:22.965 ********** 2026-04-13 00:57:54.643434 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.643437 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.643441 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.643445 | orchestrator | 2026-04-13 00:57:54.643449 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-13 00:57:54.643453 | orchestrator | Monday 13 April 2026 00:56:52 +0000 (0:00:00.867) 0:10:23.832 ********** 2026-04-13 00:57:54.643457 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.643460 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.643464 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.643468 | orchestrator | 2026-04-13 00:57:54.643472 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-13 00:57:54.643476 | orchestrator | Monday 13 April 2026 00:56:53 +0000 (0:00:00.783) 0:10:24.615 ********** 2026-04-13 00:57:54.643480 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.643483 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.643487 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.643491 | orchestrator | 2026-04-13 00:57:54.643495 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-04-13 00:57:54.643499 | orchestrator | Monday 13 April 2026 00:56:54 +0000 (0:00:01.284) 0:10:25.900 ********** 2026-04-13 00:57:54.643502 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.643508 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.643514 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.643520 | orchestrator | 2026-04-13 00:57:54.643529 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-13 00:57:54.643535 | orchestrator | Monday 13 April 2026 00:56:54 +0000 (0:00:00.309) 0:10:26.209 ********** 2026-04-13 00:57:54.643541 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.643547 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.643555 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.643562 | orchestrator | 2026-04-13 00:57:54.643569 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-13 00:57:54.643575 | orchestrator | Monday 13 April 2026 00:56:55 +0000 (0:00:00.372) 0:10:26.582 ********** 2026-04-13 00:57:54.643581 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.643587 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.643593 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.643599 | orchestrator | 2026-04-13 00:57:54.643606 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-13 00:57:54.643612 | orchestrator | Monday 13 April 2026 00:56:56 +0000 (0:00:00.853) 0:10:27.436 ********** 2026-04-13 00:57:54.643618 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.643625 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.643631 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.643638 | orchestrator | 2026-04-13 00:57:54.643644 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-13 
00:57:54.643651 | orchestrator | Monday 13 April 2026 00:56:57 +0000 (0:00:01.125) 0:10:28.561 ********** 2026-04-13 00:57:54.643662 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.643668 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.643674 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.643677 | orchestrator | 2026-04-13 00:57:54.643681 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-13 00:57:54.643685 | orchestrator | Monday 13 April 2026 00:56:57 +0000 (0:00:00.347) 0:10:28.909 ********** 2026-04-13 00:57:54.643689 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.643693 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.643696 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.643700 | orchestrator | 2026-04-13 00:57:54.643704 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-13 00:57:54.643708 | orchestrator | Monday 13 April 2026 00:56:58 +0000 (0:00:00.404) 0:10:29.314 ********** 2026-04-13 00:57:54.643712 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.643716 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.643719 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.643723 | orchestrator | 2026-04-13 00:57:54.643727 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-13 00:57:54.643731 | orchestrator | Monday 13 April 2026 00:56:58 +0000 (0:00:00.332) 0:10:29.646 ********** 2026-04-13 00:57:54.643735 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.643739 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.643742 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.643746 | orchestrator | 2026-04-13 00:57:54.643750 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-13 00:57:54.643754 | orchestrator | Monday 
13 April 2026 00:56:59 +0000 (0:00:00.681) 0:10:30.327 ********** 2026-04-13 00:57:54.643757 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.643761 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.643765 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.643769 | orchestrator | 2026-04-13 00:57:54.643773 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-13 00:57:54.643777 | orchestrator | Monday 13 April 2026 00:56:59 +0000 (0:00:00.371) 0:10:30.699 ********** 2026-04-13 00:57:54.643780 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.643784 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.643788 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.643792 | orchestrator | 2026-04-13 00:57:54.643796 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-13 00:57:54.643800 | orchestrator | Monday 13 April 2026 00:56:59 +0000 (0:00:00.346) 0:10:31.045 ********** 2026-04-13 00:57:54.643803 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.643807 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.643811 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.643815 | orchestrator | 2026-04-13 00:57:54.643819 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-13 00:57:54.643822 | orchestrator | Monday 13 April 2026 00:57:00 +0000 (0:00:00.294) 0:10:31.340 ********** 2026-04-13 00:57:54.643826 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.643830 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.643834 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.643838 | orchestrator | 2026-04-13 00:57:54.643842 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-13 00:57:54.643845 | orchestrator | Monday 13 April 2026 
00:57:00 +0000 (0:00:00.650) 0:10:31.991 ********** 2026-04-13 00:57:54.643849 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.643853 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.643857 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.643861 | orchestrator | 2026-04-13 00:57:54.643864 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-13 00:57:54.643868 | orchestrator | Monday 13 April 2026 00:57:01 +0000 (0:00:00.443) 0:10:32.434 ********** 2026-04-13 00:57:54.643872 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.643876 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.643882 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.643886 | orchestrator | 2026-04-13 00:57:54.643890 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-13 00:57:54.643894 | orchestrator | Monday 13 April 2026 00:57:01 +0000 (0:00:00.734) 0:10:33.168 ********** 2026-04-13 00:57:54.643898 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.643901 | orchestrator | 2026-04-13 00:57:54.643905 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-13 00:57:54.643909 | orchestrator | Monday 13 April 2026 00:57:03 +0000 (0:00:01.216) 0:10:34.384 ********** 2026-04-13 00:57:54.643913 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:57:54.643917 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-13 00:57:54.643920 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-13 00:57:54.643924 | orchestrator | 2026-04-13 00:57:54.643931 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-13 00:57:54.643935 | orchestrator | Monday 13 April 2026 00:57:05 +0000 
(0:00:01.889) 0:10:36.274 ********** 2026-04-13 00:57:54.643938 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-13 00:57:54.643945 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-13 00:57:54.643949 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.643953 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-13 00:57:54.643957 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-13 00:57:54.643961 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.643965 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-13 00:57:54.643969 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-13 00:57:54.643972 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.643976 | orchestrator | 2026-04-13 00:57:54.643980 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-13 00:57:54.643984 | orchestrator | Monday 13 April 2026 00:57:06 +0000 (0:00:01.230) 0:10:37.504 ********** 2026-04-13 00:57:54.643988 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.643992 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.643995 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.643999 | orchestrator | 2026-04-13 00:57:54.644003 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-13 00:57:54.644007 | orchestrator | Monday 13 April 2026 00:57:06 +0000 (0:00:00.312) 0:10:37.817 ********** 2026-04-13 00:57:54.644011 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.644015 | orchestrator | 2026-04-13 00:57:54.644019 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-13 00:57:54.644022 | orchestrator | Monday 13 April 2026 00:57:07 +0000 (0:00:00.976) 0:10:38.793 ********** 2026-04-13 
00:57:54.644026 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-13 00:57:54.644031 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-13 00:57:54.644035 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-13 00:57:54.644039 | orchestrator | 2026-04-13 00:57:54.644042 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-13 00:57:54.644046 | orchestrator | Monday 13 April 2026 00:57:08 +0000 (0:00:00.852) 0:10:39.646 ********** 2026-04-13 00:57:54.644050 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:57:54.644054 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-13 00:57:54.644067 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:57:54.644071 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-13 00:57:54.644075 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:57:54.644079 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-13 00:57:54.644086 | orchestrator | 2026-04-13 00:57:54.644092 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-13 00:57:54.644099 | orchestrator | Monday 13 April 2026 00:57:11 
+0000 (0:00:03.442) 0:10:43.088 ********** 2026-04-13 00:57:54.644105 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:57:54.644111 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-13 00:57:54.644117 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:57:54.644124 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-13 00:57:54.644130 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:57:54.644135 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-13 00:57:54.644141 | orchestrator | 2026-04-13 00:57:54.644148 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-13 00:57:54.644154 | orchestrator | Monday 13 April 2026 00:57:14 +0000 (0:00:02.225) 0:10:45.314 ********** 2026-04-13 00:57:54.644161 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-13 00:57:54.644167 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.644173 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-13 00:57:54.644180 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.644187 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-13 00:57:54.644193 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.644200 | orchestrator | 2026-04-13 00:57:54.644207 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-13 00:57:54.644214 | orchestrator | Monday 13 April 2026 00:57:15 +0000 (0:00:01.292) 0:10:46.607 ********** 2026-04-13 00:57:54.644221 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-13 00:57:54.644228 | orchestrator | 2026-04-13 00:57:54.644235 | orchestrator | TASK [ceph-rgw : Create ec profile] 
******************************************** 2026-04-13 00:57:54.644241 | orchestrator | Monday 13 April 2026 00:57:15 +0000 (0:00:00.241) 0:10:46.848 ********** 2026-04-13 00:57:54.644251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-13 00:57:54.644272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-13 00:57:54.644280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-13 00:57:54.644284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-13 00:57:54.644288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-13 00:57:54.644292 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.644296 | orchestrator | 2026-04-13 00:57:54.644300 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-13 00:57:54.644303 | orchestrator | Monday 13 April 2026 00:57:16 +0000 (0:00:00.846) 0:10:47.695 ********** 2026-04-13 00:57:54.644307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-13 00:57:54.644315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-13 00:57:54.644319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-13 00:57:54.644323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-13 00:57:54.644327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-13 00:57:54.644331 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.644334 | orchestrator | 2026-04-13 00:57:54.644338 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-13 00:57:54.644370 | orchestrator | Monday 13 April 2026 00:57:17 +0000 (0:00:00.862) 0:10:48.558 ********** 2026-04-13 00:57:54.644375 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-13 00:57:54.644379 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-13 00:57:54.644383 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-13 00:57:54.644387 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-13 00:57:54.644391 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-13 00:57:54.644395 | orchestrator | 2026-04-13 00:57:54.644399 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-13 00:57:54.644402 | orchestrator | Monday 13 April 2026 00:57:41 +0000 (0:00:24.260) 0:11:12.818 ********** 2026-04-13 00:57:54.644406 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.644410 | 
orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.644414 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.644418 | orchestrator | 2026-04-13 00:57:54.644422 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-13 00:57:54.644425 | orchestrator | Monday 13 April 2026 00:57:42 +0000 (0:00:00.636) 0:11:13.455 ********** 2026-04-13 00:57:54.644429 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.644433 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.644437 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.644441 | orchestrator | 2026-04-13 00:57:54.644445 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-13 00:57:54.644449 | orchestrator | Monday 13 April 2026 00:57:42 +0000 (0:00:00.399) 0:11:13.854 ********** 2026-04-13 00:57:54.644452 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.644456 | orchestrator | 2026-04-13 00:57:54.644460 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-13 00:57:54.644464 | orchestrator | Monday 13 April 2026 00:57:43 +0000 (0:00:00.593) 0:11:14.447 ********** 2026-04-13 00:57:54.644468 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.644472 | orchestrator | 2026-04-13 00:57:54.644476 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-13 00:57:54.644479 | orchestrator | Monday 13 April 2026 00:57:44 +0000 (0:00:00.826) 0:11:15.274 ********** 2026-04-13 00:57:54.644483 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.644487 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.644494 | orchestrator | changed: [testbed-node-5] 2026-04-13 
00:57:54.644498 | orchestrator | 2026-04-13 00:57:54.644502 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-13 00:57:54.644508 | orchestrator | Monday 13 April 2026 00:57:45 +0000 (0:00:01.364) 0:11:16.639 ********** 2026-04-13 00:57:54.644512 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.644516 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.644520 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.644524 | orchestrator | 2026-04-13 00:57:54.644530 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-13 00:57:54.644535 | orchestrator | Monday 13 April 2026 00:57:46 +0000 (0:00:01.248) 0:11:17.887 ********** 2026-04-13 00:57:54.644539 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:57:54.644542 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:57:54.644546 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:57:54.644550 | orchestrator | 2026-04-13 00:57:54.644554 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-13 00:57:54.644558 | orchestrator | Monday 13 April 2026 00:57:48 +0000 (0:00:02.134) 0:11:20.022 ********** 2026-04-13 00:57:54.644562 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-13 00:57:54.644566 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-13 00:57:54.644570 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-13 00:57:54.644574 | orchestrator | 2026-04-13 00:57:54.644578 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-13 00:57:54.644582 | orchestrator | 
Monday 13 April 2026 00:57:51 +0000 (0:00:02.385) 0:11:22.408 ********** 2026-04-13 00:57:54.644585 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.644589 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.644593 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.644597 | orchestrator | 2026-04-13 00:57:54.644601 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-13 00:57:54.644605 | orchestrator | Monday 13 April 2026 00:57:51 +0000 (0:00:00.630) 0:11:23.038 ********** 2026-04-13 00:57:54.644609 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:57:54.644613 | orchestrator | 2026-04-13 00:57:54.644617 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-13 00:57:54.644621 | orchestrator | Monday 13 April 2026 00:57:52 +0000 (0:00:00.507) 0:11:23.546 ********** 2026-04-13 00:57:54.644625 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:57:54.644629 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:57:54.644633 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:57:54.644637 | orchestrator | 2026-04-13 00:57:54.644641 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-13 00:57:54.644644 | orchestrator | Monday 13 April 2026 00:57:52 +0000 (0:00:00.314) 0:11:23.861 ********** 2026-04-13 00:57:54.644648 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:57:54.644652 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:57:54.644656 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:57:54.644660 | orchestrator | 2026-04-13 00:57:54.644664 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-13 00:57:54.644668 | orchestrator | Monday 13 April 2026 00:57:53 +0000 (0:00:00.484) 0:11:24.346 ********** 
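The "Generate systemd unit file", "Generate systemd ceph-radosgw target file", and "Enable ceph-radosgw.target" tasks earlier in this play template and enable per-instance units for the containerized radosgw. A minimal sketch of what such a unit might look like; the container name, image, volumes, and command line here are illustrative assumptions, not values taken from this log:

```ini
# Hypothetical sketch of the per-instance unit templated for a
# containerized RGW; details are assumptions, not from this log.
[Unit]
Description=Ceph RGW
After=network-online.target docker.service
Wants=network-online.target
PartOf=ceph-radosgw.target

[Service]
ExecStartPre=-/usr/bin/docker rm -f ceph-rgw-%i
ExecStart=/usr/bin/docker run --rm --net=host --name ceph-rgw-%i \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
  ceph-daemon-image rgw
ExecStop=-/usr/bin/docker stop ceph-rgw-%i
Restart=always
RestartSec=10s

[Install]
WantedBy=ceph-radosgw.target
```

The `WantedBy=ceph-radosgw.target` line is what makes the separate "Enable ceph-radosgw.target" task sufficient to pull all RGW instances up at boot.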
2026-04-13 00:57:54.644672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:57:54.644676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:57:54.644679 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:57:54.644683 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:54.644687 | orchestrator |
2026-04-13 00:57:54.644691 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-13 00:57:54.644698 | orchestrator | Monday 13 April 2026 00:57:53 +0000 (0:00:00.546) 0:11:24.892 **********
2026-04-13 00:57:54.644702 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:54.644706 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:54.644709 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:54.644713 | orchestrator |
2026-04-13 00:57:54.644717 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:57:54.644721 | orchestrator | testbed-node-0 : ok=141  changed=35  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0
2026-04-13 00:57:54.644726 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-04-13 00:57:54.644730 | orchestrator | testbed-node-2 : ok=134  changed=34  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-04-13 00:57:54.644733 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0
2026-04-13 00:57:54.644737 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-04-13 00:57:54.644741 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-04-13 00:57:54.644745 | orchestrator |
2026-04-13 00:57:54.644749 | orchestrator |
2026-04-13 00:57:54.644753 | orchestrator |
2026-04-13 00:57:54.644757 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:57:54.644761 | orchestrator | Monday 13 April 2026 00:57:53 +0000 (0:00:00.266) 0:11:25.158 **********
2026-04-13 00:57:54.644765 | orchestrator | ===============================================================================
2026-04-13 00:57:54.644771 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 77.91s
2026-04-13 00:57:54.644775 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 34.59s
2026-04-13 00:57:54.644781 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 24.26s
2026-04-13 00:57:54.644785 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.55s
2026-04-13 00:57:54.644789 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.13s
2026-04-13 00:57:54.644793 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.83s
2026-04-13 00:57:54.644797 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.91s
2026-04-13 00:57:54.644801 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.66s
2026-04-13 00:57:54.644805 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.99s
2026-04-13 00:57:54.644808 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 6.40s
2026-04-13 00:57:54.644812 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.15s
2026-04-13 00:57:54.644816 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 5.97s
2026-04-13 00:57:54.644820 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.89s
2026-04-13 00:57:54.644824 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.37s
2026-04-13 00:57:54.644828 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.29s
2026-04-13 00:57:54.644832 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.84s
2026-04-13 00:57:54.644836 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.65s
2026-04-13 00:57:54.644840 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.60s
2026-04-13 00:57:54.644844 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.48s
2026-04-13 00:57:54.644851 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 3.44s
2026-04-13 00:57:54.644855 | orchestrator | 2026-04-13 00:57:54 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:54.644859 | orchestrator | 2026-04-13 00:57:54 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:57.674469 | orchestrator | 2026-04-13 00:57:57 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED
2026-04-13 00:57:57.677436 | orchestrator | 2026-04-13 00:57:57 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:57:57.680278 | orchestrator | 2026-04-13 00:57:57 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED
2026-04-13 00:57:57.680330 | orchestrator | 2026-04-13 00:57:57 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:58:00.743101 | orchestrator | 2026-04-13 00:58:00 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED
2026-04-13 00:58:00.743232 | orchestrator | 2026-04-13 00:58:00 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED
2026-04-13 00:58:00.743241 | orchestrator | 2026-04-13 00:58:00 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state
2026-04-13 00:58:00.743246 | orchestrator | 2026-04-13 00:58:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:03.782264 | orchestrator | 2026-04-13 00:58:03 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:58:03.784821 | orchestrator | 2026-04-13 00:58:03 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:03.787740 | orchestrator | 2026-04-13 00:58:03 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:03.788724 | orchestrator | 2026-04-13 00:58:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:06.841540 | orchestrator | 2026-04-13 00:58:06 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:58:06.843199 | orchestrator | 2026-04-13 00:58:06 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:06.844973 | orchestrator | 2026-04-13 00:58:06 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:06.845001 | orchestrator | 2026-04-13 00:58:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:09.898305 | orchestrator | 2026-04-13 00:58:09 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:58:09.901196 | orchestrator | 2026-04-13 00:58:09 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:09.903852 | orchestrator | 2026-04-13 00:58:09 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:09.903995 | orchestrator | 2026-04-13 00:58:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:12.973150 | orchestrator | 2026-04-13 00:58:12 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:58:12.973265 | orchestrator | 2026-04-13 00:58:12 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:12.975920 | orchestrator | 2026-04-13 
00:58:12 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:12.976561 | orchestrator | 2026-04-13 00:58:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:16.020249 | orchestrator | 2026-04-13 00:58:16 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:58:16.023062 | orchestrator | 2026-04-13 00:58:16 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:16.025646 | orchestrator | 2026-04-13 00:58:16 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:16.025740 | orchestrator | 2026-04-13 00:58:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:19.077868 | orchestrator | 2026-04-13 00:58:19 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:58:19.079233 | orchestrator | 2026-04-13 00:58:19 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:19.080625 | orchestrator | 2026-04-13 00:58:19 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:19.080695 | orchestrator | 2026-04-13 00:58:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:22.129474 | orchestrator | 2026-04-13 00:58:22 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:58:22.131626 | orchestrator | 2026-04-13 00:58:22 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:22.133964 | orchestrator | 2026-04-13 00:58:22 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:22.134282 | orchestrator | 2026-04-13 00:58:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:25.182769 | orchestrator | 2026-04-13 00:58:25 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:58:25.184802 | orchestrator | 2026-04-13 00:58:25 | INFO  | Task 
dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:25.187076 | orchestrator | 2026-04-13 00:58:25 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:25.187171 | orchestrator | 2026-04-13 00:58:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:28.232408 | orchestrator | 2026-04-13 00:58:28 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:58:28.233999 | orchestrator | 2026-04-13 00:58:28 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:28.236693 | orchestrator | 2026-04-13 00:58:28 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:28.237831 | orchestrator | 2026-04-13 00:58:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:31.295649 | orchestrator | 2026-04-13 00:58:31 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:58:31.296240 | orchestrator | 2026-04-13 00:58:31 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:31.297138 | orchestrator | 2026-04-13 00:58:31 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:31.297191 | orchestrator | 2026-04-13 00:58:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:34.341578 | orchestrator | 2026-04-13 00:58:34 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:58:34.344207 | orchestrator | 2026-04-13 00:58:34 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:34.346511 | orchestrator | 2026-04-13 00:58:34 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:34.346646 | orchestrator | 2026-04-13 00:58:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:37.393646 | orchestrator | 2026-04-13 00:58:37 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state 
STARTED 2026-04-13 00:58:37.396520 | orchestrator | 2026-04-13 00:58:37 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:37.399039 | orchestrator | 2026-04-13 00:58:37 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:37.399082 | orchestrator | 2026-04-13 00:58:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:40.441398 | orchestrator | 2026-04-13 00:58:40 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:58:40.442761 | orchestrator | 2026-04-13 00:58:40 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:40.444244 | orchestrator | 2026-04-13 00:58:40 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:40.444302 | orchestrator | 2026-04-13 00:58:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:43.486431 | orchestrator | 2026-04-13 00:58:43 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:58:43.487709 | orchestrator | 2026-04-13 00:58:43 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:43.490961 | orchestrator | 2026-04-13 00:58:43 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:43.491026 | orchestrator | 2026-04-13 00:58:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:46.545460 | orchestrator | 2026-04-13 00:58:46 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:58:46.547225 | orchestrator | 2026-04-13 00:58:46 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:46.550296 | orchestrator | 2026-04-13 00:58:46 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:46.550410 | orchestrator | 2026-04-13 00:58:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:58:49.614835 | orchestrator | 
2026-04-13 00:58:49 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:58:49.617681 | orchestrator | 2026-04-13 00:58:49 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state STARTED 2026-04-13 00:58:49.620256 | orchestrator | 2026-04-13 00:58:49 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state STARTED 2026-04-13 00:58:49.620578 | orchestrator | 2026-04-13 00:58:49 | INFO  | Wait 1 second(s) until the next check
[... the same three-task status check repeats every ~3 s from 00:58:52 through 00:59:20; all three tasks remain in state STARTED ...]
2026-04-13 00:59:23.222725 | orchestrator | 2026-04-13 00:59:23 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:59:23.225041 | orchestrator | 2026-04-13 00:59:23 | INFO  | Task dd115439-f40d-423e-9878-a7078193fb3e is in state SUCCESS 2026-04-13 00:59:23.225099 | orchestrator |
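The status-polling run above can be sketched as a simple wait loop. This is a minimal sketch, not the actual OSISM client code: `wait_for_tasks` and `get_state` are hypothetical names standing in for whatever reports the Celery-style task states seen in the log.

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0):
    """Poll each task until it reaches a terminal state (SUCCESS/FAILURE).

    Hypothetical helper mirroring the log above: each round re-checks all
    still-pending tasks, then announces the wait and sleeps before the
    next round.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)  # e.g. "STARTED", "SUCCESS"
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)
    return states
```

Note that although the message says one second, consecutive rounds in the log are roughly three seconds apart; each round also spends time querying the task states themselves.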
2026-04-13 00:59:23.227318 | orchestrator | 2026-04-13 00:59:23.227416 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-04-13 00:59:23.227446 | orchestrator | 2026-04-13 00:59:23.227467 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-13 00:59:23.227480 | orchestrator | Monday 13 April 2026 00:56:21 +0000 (0:00:00.151) 0:00:00.151 ********** 2026-04-13 00:59:23.227492 | orchestrator | ok: [localhost] => { 2026-04-13 00:59:23.227505 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-04-13 00:59:23.227517 | orchestrator | } 2026-04-13 00:59:23.227529 | orchestrator | 2026-04-13 00:59:23.227540 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-04-13 00:59:23.227552 | orchestrator | Monday 13 April 2026 00:56:21 +0000 (0:00:00.064) 0:00:00.215 ********** 2026-04-13 00:59:23.227564 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-04-13 00:59:23.227576 | orchestrator | ...ignoring 2026-04-13 00:59:23.227596 | orchestrator | 2026-04-13 00:59:23.227616 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-04-13 00:59:23.227635 | orchestrator | Monday 13 April 2026 00:56:24 +0000 (0:00:03.015) 0:00:03.231 ********** 2026-04-13 00:59:23.227654 | orchestrator | skipping: [localhost] 2026-04-13 00:59:23.227672 | orchestrator | 2026-04-13 00:59:23.227691 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-04-13 00:59:23.227710 | orchestrator | Monday 13 April 2026 00:56:24 +0000 (0:00:00.064) 0:00:03.295 ********** 2026-04-13 00:59:23.227728 | orchestrator | ok: [localhost] 2026-04-13 00:59:23.227746 | orchestrator | 2026-04-13 00:59:23.227766 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 00:59:23.227785 | orchestrator | 2026-04-13 00:59:23.227804 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 00:59:23.227825 | orchestrator | Monday 13 April 2026 00:56:24 +0000 (0:00:00.218) 0:00:03.514 ********** 2026-04-13 00:59:23.227843 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:59:23.227861 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:59:23.227874 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:59:23.227887 | orchestrator | 2026-04-13 00:59:23.227899 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 00:59:23.227928 | orchestrator | Monday 13 April 2026 00:56:25 +0000 (0:00:00.406) 0:00:03.921 ********** 2026-04-13 00:59:23.227942 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-13 00:59:23.227955 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-04-13 00:59:23.227968 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-13 00:59:23.227980 | orchestrator | 2026-04-13 00:59:23.227991 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-13 00:59:23.228003 | orchestrator | 2026-04-13 00:59:23.228014 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-13 00:59:23.228026 | orchestrator | Monday 13 April 2026 00:56:25 +0000 (0:00:00.512) 0:00:04.433 ********** 2026-04-13 00:59:23.228037 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-13 00:59:23.228049 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-13 00:59:23.228060 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-13 00:59:23.228072 | orchestrator | 2026-04-13 00:59:23.228088 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-13 00:59:23.228107 | orchestrator | Monday 13 April 2026 00:56:26 +0000 (0:00:00.433) 0:00:04.867 ********** 2026-04-13 00:59:23.228159 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:59:23.228179 | orchestrator | 2026-04-13 00:59:23.228191 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-13 00:59:23.228202 | orchestrator | Monday 13 April 2026 00:56:26 +0000 (0:00:00.659) 0:00:05.527 ********** 2026-04-13 00:59:23.228556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-13 00:59:23.228608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', ... same value as for testbed-node-0 above, differing only in 'MYSQL_HOST': '192.168.16.12' ...}) 2026-04-13 00:59:23.228642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', ... same value, 'MYSQL_HOST': '192.168.16.11' ...}) 2026-04-13 00:59:23.228656 | orchestrator | 2026-04-13 00:59:23.228899 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-13 00:59:23.229607 | orchestrator | Monday 13 April 2026 00:56:30 +0000 (0:00:00.560) 0:00:09.299 ********** 2026-04-13 00:59:23.229631 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.229644 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.229655 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.229667 | orchestrator | 2026-04-13 00:59:23.229678 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-13 00:59:23.229696 | orchestrator | Monday 13 April 2026 00:56:31 +0000 (0:00:01.533) 0:00:09.860 ********** 2026-04-13 00:59:23.229714 | orchestrator | skipping: [testbed-node-1] 2026-04-13
00:59:23.229733 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.229751 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.229769 | orchestrator | 2026-04-13 00:59:23.229786 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-13 00:59:23.229803 | orchestrator | Monday 13 April 2026 00:56:32 +0000 (0:00:01.533) 0:00:11.393 ********** 2026-04-13 00:59:23.229838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-13 00:59:23.230100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', ... same value as for testbed-node-0 above, differing only in 'MYSQL_HOST': '192.168.16.12' ...}) 2026-04-13 00:59:23.230148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', ... same value, 'MYSQL_HOST': '192.168.16.11' ...}) 2026-04-13
00:59:23.230182 | orchestrator | 2026-04-13 00:59:23.230196 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-13 00:59:23.230210 | orchestrator | Monday 13 April 2026 00:56:37 +0000 (0:00:04.286) 0:00:15.680 ********** 2026-04-13 00:59:23.230223 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.230236 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.230249 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.230261 | orchestrator | 2026-04-13 00:59:23.230463 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-13 00:59:23.230496 | orchestrator | Monday 13 April 2026 00:56:38 +0000 (0:00:01.126) 0:00:16.806 ********** 2026-04-13 00:59:23.230518 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:59:23.230538 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:59:23.230557 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.230576 | orchestrator | 2026-04-13 00:59:23.230589 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-13 00:59:23.230609 | orchestrator | Monday 13 April 2026 00:56:42 +0000 (0:00:03.951) 0:00:20.758 ********** 2026-04-13 00:59:23.230628 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:59:23.230642 | orchestrator | 2026-04-13 00:59:23.230654 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-13 00:59:23.230665 | orchestrator | Monday 13 April 2026 00:56:42 +0000 (0:00:00.767) 0:00:21.525 ********** 2026-04-13 00:59:23.230743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:59:23.230762 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:59:23.230874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', ... same value as for testbed-node-0 above, differing only in 'MYSQL_HOST': '192.168.16.11' ...})  2026-04-13 00:59:23.230900 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.230946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', ... same value, 'MYSQL_HOST': '192.168.16.12' ...})  2026-04-13 00:59:23.230960 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.230971 | orchestrator | 2026-04-13 00:59:23.230988 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-13 00:59:23.230999 | orchestrator | Monday 13 April 2026 00:56:46 +0000 (0:00:03.940) 0:00:25.466 ********** 2026-04-13 00:59:23.231016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', ... same value, 'MYSQL_HOST': '192.168.16.10' ...})  2026-04-13 00:59:23.231047 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:59:23.231104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', ... same value, 'MYSQL_HOST': '192.168.16.11' ...})  2026-04-13 00:59:23.231127 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.231155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', ... same value, 'MYSQL_HOST': '192.168.16.12' ...})  2026-04-13 00:59:23.231185 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.231202 | orchestrator | 2026-04-13 00:59:23.231220 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-13 00:59:23.231236 | orchestrator | Monday 13 April 2026 00:56:49 +0000 (0:00:02.827) 0:00:28.294 ********** 2026-04-13 00:59:23.231251 | orchestrator |
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:59:23.231263 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:59:23.231314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:59:23.231337 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.231347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:59:23.231361 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.231377 | orchestrator | 2026-04-13 00:59:23.231397 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-04-13 00:59:23.231421 | orchestrator | Monday 13 April 2026 00:56:53 +0000 
(0:00:03.352) 0:00:31.647 ********** 2026-04-13 00:59:23.231511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-13 00:59:23.231610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-13 00:59:23.231645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-13 00:59:23.231670 | orchestrator | 2026-04-13 00:59:23.231681 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-13 00:59:23.231691 | orchestrator | Monday 13 April 2026 00:56:57 +0000 (0:00:03.994) 0:00:35.641 ********** 2026-04-13 00:59:23.231704 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.231720 | orchestrator | 
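The mariadb container healthcheck in the item dicts above runs `/usr/bin/clustercheck`, with `AVAILABLE_WHEN_DONOR: '1'` in the environment. A minimal sketch of that decision logic, assuming the standard Galera clustercheck semantics (the real script queries `wsrep_local_state` over MySQL and answers an HTTP status for HAProxy's health probe; the function and constant names here are illustrative, not the script's own):

```python
# Sketch of Galera clustercheck semantics: a node is usable when its
# wsrep_local_state is Synced (4), or Donor/Desynced (2) when
# AVAILABLE_WHEN_DONOR is enabled, as it is in the environment above.
# The real /usr/bin/clustercheck queries MariaDB for this variable and
# replies with HTTP 200 or 503; this pure function only models the decision.
WSREP_DONOR = 2   # node is currently serving as an SST/IST donor
WSREP_SYNCED = 4  # node is fully synced with the cluster

def cluster_ok(wsrep_local_state: int, available_when_donor: bool = True) -> int:
    """Return the HTTP status a clustercheck-style probe would report."""
    if wsrep_local_state == WSREP_SYNCED:
        return 200
    if wsrep_local_state == WSREP_DONOR and available_when_donor:
        return 200
    return 503
```

With `AVAILABLE_WHEN_DONOR` set, a donor node keeps serving traffic during state transfers instead of being ejected from the HAProxy backend.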
changed: [testbed-node-1] 2026-04-13 00:59:23.231731 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:59:23.231742 | orchestrator | 2026-04-13 00:59:23.231752 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-13 00:59:23.231769 | orchestrator | Monday 13 April 2026 00:56:58 +0000 (0:00:01.015) 0:00:36.657 ********** 2026-04-13 00:59:23.231780 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:59:23.231791 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:59:23.231807 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:59:23.231819 | orchestrator | 2026-04-13 00:59:23.231835 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-13 00:59:23.231846 | orchestrator | Monday 13 April 2026 00:56:58 +0000 (0:00:00.432) 0:00:37.089 ********** 2026-04-13 00:59:23.231856 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:59:23.231866 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:59:23.231876 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:59:23.231886 | orchestrator | 2026-04-13 00:59:23.231896 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-13 00:59:23.231906 | orchestrator | Monday 13 April 2026 00:56:59 +0000 (0:00:00.548) 0:00:37.638 ********** 2026-04-13 00:59:23.231918 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-13 00:59:23.231928 | orchestrator | ...ignoring 2026-04-13 00:59:23.231939 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-13 00:59:23.231950 | orchestrator | ...ignoring 2026-04-13 00:59:23.231960 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-13 00:59:23.231970 | orchestrator | ...ignoring 2026-04-13 00:59:23.231980 | orchestrator | 2026-04-13 00:59:23.231990 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-13 00:59:23.232033 | orchestrator | Monday 13 April 2026 00:57:10 +0000 (0:00:11.373) 0:00:49.011 ********** 2026-04-13 00:59:23.232044 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:59:23.232054 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:59:23.232064 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:59:23.232074 | orchestrator | 2026-04-13 00:59:23.232084 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-13 00:59:23.232124 | orchestrator | Monday 13 April 2026 00:57:10 +0000 (0:00:00.459) 0:00:49.471 ********** 2026-04-13 00:59:23.232135 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:59:23.232145 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.232155 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.232165 | orchestrator | 2026-04-13 00:59:23.232175 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-13 00:59:23.232193 | orchestrator | Monday 13 April 2026 00:57:11 +0000 (0:00:00.477) 0:00:49.948 ********** 2026-04-13 00:59:23.232203 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:59:23.232213 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.232223 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.232233 | orchestrator | 2026-04-13 00:59:23.232244 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-13 00:59:23.232254 | orchestrator | Monday 13 April 2026 00:57:11 +0000 (0:00:00.469) 0:00:50.417 ********** 2026-04-13 00:59:23.232264 | orchestrator | skipping: 
[testbed-node-0] 2026-04-13 00:59:23.232274 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.232305 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.232324 | orchestrator | 2026-04-13 00:59:23.232341 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-13 00:59:23.232358 | orchestrator | Monday 13 April 2026 00:57:12 +0000 (0:00:00.784) 0:00:51.202 ********** 2026-04-13 00:59:23.232372 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:59:23.232390 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:59:23.232410 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:59:23.232425 | orchestrator | 2026-04-13 00:59:23.232442 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-13 00:59:23.232460 | orchestrator | Monday 13 April 2026 00:57:13 +0000 (0:00:00.505) 0:00:51.707 ********** 2026-04-13 00:59:23.232487 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:59:23.232501 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.232518 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.232528 | orchestrator | 2026-04-13 00:59:23.232539 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-13 00:59:23.232549 | orchestrator | Monday 13 April 2026 00:57:13 +0000 (0:00:00.481) 0:00:52.188 ********** 2026-04-13 00:59:23.232558 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.232569 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.232579 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-13 00:59:23.232589 | orchestrator | 2026-04-13 00:59:23.232599 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-13 00:59:23.232610 | orchestrator | Monday 13 April 2026 00:57:14 +0000 (0:00:00.386) 0:00:52.575 ********** 2026-04-13 
00:59:23.232620 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.232630 | orchestrator | 2026-04-13 00:59:23.232640 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-04-13 00:59:23.232650 | orchestrator | Monday 13 April 2026 00:57:24 +0000 (0:00:10.764) 0:01:03.339 ********** 2026-04-13 00:59:23.232660 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:59:23.232670 | orchestrator | 2026-04-13 00:59:23.232680 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-13 00:59:23.232690 | orchestrator | Monday 13 April 2026 00:57:25 +0000 (0:00:00.322) 0:01:03.662 ********** 2026-04-13 00:59:23.232700 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:59:23.232710 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.232720 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.232730 | orchestrator | 2026-04-13 00:59:23.232740 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-04-13 00:59:23.232750 | orchestrator | Monday 13 April 2026 00:57:25 +0000 (0:00:00.828) 0:01:04.490 ********** 2026-04-13 00:59:23.232760 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.232770 | orchestrator | 2026-04-13 00:59:23.232781 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-04-13 00:59:23.232791 | orchestrator | Monday 13 April 2026 00:57:33 +0000 (0:00:07.907) 0:01:12.398 ********** 2026-04-13 00:59:23.232801 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:59:23.232811 | orchestrator | 2026-04-13 00:59:23.232821 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-04-13 00:59:23.232838 | orchestrator | Monday 13 April 2026 00:57:35 +0000 (0:00:01.667) 0:01:14.065 ********** 2026-04-13 00:59:23.232857 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:59:23.232867 | 
orchestrator | 2026-04-13 00:59:23.232877 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-04-13 00:59:23.232888 | orchestrator | Monday 13 April 2026 00:57:38 +0000 (0:00:02.811) 0:01:16.877 ********** 2026-04-13 00:59:23.232898 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.232908 | orchestrator | 2026-04-13 00:59:23.232918 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-13 00:59:23.232929 | orchestrator | Monday 13 April 2026 00:57:38 +0000 (0:00:00.136) 0:01:17.014 ********** 2026-04-13 00:59:23.232939 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:59:23.232949 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.232959 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.232970 | orchestrator | 2026-04-13 00:59:23.232980 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-13 00:59:23.232990 | orchestrator | Monday 13 April 2026 00:57:38 +0000 (0:00:00.348) 0:01:17.362 ********** 2026-04-13 00:59:23.233000 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:59:23.233010 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:59:23.233020 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:59:23.233031 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-13 00:59:23.233041 | orchestrator | 2026-04-13 00:59:23.233051 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-13 00:59:23.233061 | orchestrator | skipping: no hosts matched 2026-04-13 00:59:23.233071 | orchestrator | 2026-04-13 00:59:23.233081 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-13 00:59:23.233091 | orchestrator | 2026-04-13 00:59:23.233102 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-04-13 00:59:23.233116 | orchestrator | Monday 13 April 2026 00:57:39 +0000 (0:00:00.329) 0:01:17.691 ********** 2026-04-13 00:59:23.233130 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:59:23.233140 | orchestrator | 2026-04-13 00:59:23.233150 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-13 00:59:23.233161 | orchestrator | Monday 13 April 2026 00:57:59 +0000 (0:00:20.647) 0:01:38.339 ********** 2026-04-13 00:59:23.233171 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:59:23.233181 | orchestrator | 2026-04-13 00:59:23.233192 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-13 00:59:23.233209 | orchestrator | Monday 13 April 2026 00:58:10 +0000 (0:00:10.705) 0:01:49.045 ********** 2026-04-13 00:59:23.233226 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:59:23.233248 | orchestrator | 2026-04-13 00:59:23.233271 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-13 00:59:23.233316 | orchestrator | 2026-04-13 00:59:23.233333 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-13 00:59:23.233349 | orchestrator | Monday 13 April 2026 00:58:13 +0000 (0:00:02.719) 0:01:51.765 ********** 2026-04-13 00:59:23.233365 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:59:23.233381 | orchestrator | 2026-04-13 00:59:23.233397 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-13 00:59:23.233413 | orchestrator | Monday 13 April 2026 00:58:31 +0000 (0:00:18.254) 0:02:10.019 ********** 2026-04-13 00:59:23.233430 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:59:23.233445 | orchestrator | 2026-04-13 00:59:23.233460 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-13 00:59:23.233477 
| orchestrator | Monday 13 April 2026 00:58:47 +0000 (0:00:16.067) 0:02:26.087 ********** 2026-04-13 00:59:23.233493 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:59:23.233509 | orchestrator | 2026-04-13 00:59:23.233526 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-13 00:59:23.233542 | orchestrator | 2026-04-13 00:59:23.233570 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-13 00:59:23.233603 | orchestrator | Monday 13 April 2026 00:58:49 +0000 (0:00:02.365) 0:02:28.452 ********** 2026-04-13 00:59:23.233621 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.233639 | orchestrator | 2026-04-13 00:59:23.233656 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-13 00:59:23.233676 | orchestrator | Monday 13 April 2026 00:59:01 +0000 (0:00:11.916) 0:02:40.369 ********** 2026-04-13 00:59:23.233686 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:59:23.233696 | orchestrator | 2026-04-13 00:59:23.233706 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-13 00:59:23.233717 | orchestrator | Monday 13 April 2026 00:59:06 +0000 (0:00:04.573) 0:02:44.943 ********** 2026-04-13 00:59:23.233726 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:59:23.233737 | orchestrator | 2026-04-13 00:59:23.233747 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-13 00:59:23.233757 | orchestrator | 2026-04-13 00:59:23.233767 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-13 00:59:23.233777 | orchestrator | Monday 13 April 2026 00:59:08 +0000 (0:00:02.480) 0:02:47.424 ********** 2026-04-13 00:59:23.233787 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:59:23.233797 | orchestrator | 
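The plays above show the cluster's rolling start order: testbed-node-0 bootstraps, each remaining node is then restarted and waited on one at a time (port liveness, then WSREP sync), and the bootstrap host is restarted last so every member ends up on a normally-started server. kolla-ansible drives this with serial plays; the helper below is only an illustrative sketch of that sequencing, not code from the role:

```python
# Sketch of the restart sequencing visible in this log: non-bootstrap
# hosts first, one at a time, each followed by a port-liveness wait and
# a WSREP sync wait; the bootstrap host is restarted last.
def rolling_restart_plan(bootstrap_host: str, hosts: list[str]) -> list[tuple[str, str]]:
    ordered = [h for h in hosts if h != bootstrap_host] + [bootstrap_host]
    plan = []
    for host in ordered:
        plan.append(("restart", host))    # "Restart MariaDB container"
        plan.append(("wait_port", host))  # "Wait for MariaDB service port liveness"
        plan.append(("wait_wsrep", host)) # "Wait for MariaDB service to sync WSREP"
    return plan
```

For the three testbed nodes this reproduces the order seen in the log: node-1, node-2, then node-0.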
2026-04-13 00:59:23.233807 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-13 00:59:23.233817 | orchestrator | Monday 13 April 2026 00:59:09 +0000 (0:00:00.717) 0:02:48.141 ********** 2026-04-13 00:59:23.233827 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.233837 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.233847 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.233857 | orchestrator | 2026-04-13 00:59:23.233867 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-13 00:59:23.233878 | orchestrator | Monday 13 April 2026 00:59:11 +0000 (0:00:02.300) 0:02:50.441 ********** 2026-04-13 00:59:23.233895 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.233911 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.233928 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.233945 | orchestrator | 2026-04-13 00:59:23.233961 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-13 00:59:23.233989 | orchestrator | Monday 13 April 2026 00:59:14 +0000 (0:00:02.215) 0:02:52.657 ********** 2026-04-13 00:59:23.234006 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.234085 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.234096 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.234106 | orchestrator | 2026-04-13 00:59:23.234116 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-13 00:59:23.234127 | orchestrator | Monday 13 April 2026 00:59:16 +0000 (0:00:02.267) 0:02:54.924 ********** 2026-04-13 00:59:23.234136 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.234147 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.234157 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.234166 | orchestrator | 
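The post-deploy tasks above create the shard root, monitor, and backup users on the bootstrap host only; node-1 and node-2 skip because Galera replicates the `mysql.*` changes cluster-wide. A hedged sketch of the kind of DDL such a task issues for the `monitor` user (the exact statements and privilege list live in the kolla-ansible role; the privilege shown here is an assumption for illustration, and the password is a placeholder, not the one from this deployment):

```python
# Illustrative DDL builder for a monitoring user like the 'monitor'
# account seen in the container environment above. The GRANT shown is an
# assumed minimal privilege for reading status variables; the real role
# defines its own statements.
def monitor_user_ddl(user: str, host_pattern: str, password: str) -> list[str]:
    return [
        f"CREATE USER IF NOT EXISTS '{user}'@'{host_pattern}' "
        f"IDENTIFIED BY '{password}';",
        f"GRANT REPLICATION CLIENT ON *.* TO '{user}'@'{host_pattern}';",
    ]
```

Running this once against the bootstrap node suffices; the synced members pick up the new account through replication, which is exactly why the log shows `changed` only on testbed-node-0.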
2026-04-13 00:59:23.234176 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-13 00:59:23.234186 | orchestrator | Monday 13 April 2026 00:59:18 +0000 (0:00:02.315) 0:02:57.240 ********** 2026-04-13 00:59:23.234196 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:59:23.234207 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:59:23.234217 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:59:23.234226 | orchestrator | 2026-04-13 00:59:23.234236 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-13 00:59:23.234246 | orchestrator | Monday 13 April 2026 00:59:21 +0000 (0:00:02.925) 0:03:00.165 ********** 2026-04-13 00:59:23.234256 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:59:23.234266 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.234276 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.234352 | orchestrator | 2026-04-13 00:59:23.234366 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:59:23.234394 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-13 00:59:23.234418 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-04-13 00:59:23.234442 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-13 00:59:23.234460 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-13 00:59:23.234478 | orchestrator | 2026-04-13 00:59:23.234496 | orchestrator | 2026-04-13 00:59:23.234514 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:59:23.234524 | orchestrator | Monday 13 April 2026 00:59:21 +0000 (0:00:00.265) 0:03:00.431 ********** 2026-04-13 00:59:23.234534 | 
orchestrator | =============================================================================== 2026-04-13 00:59:23.234544 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.90s 2026-04-13 00:59:23.234554 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.77s 2026-04-13 00:59:23.234564 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.92s 2026-04-13 00:59:23.234574 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.37s 2026-04-13 00:59:23.234584 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.76s 2026-04-13 00:59:23.234594 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.91s 2026-04-13 00:59:23.234619 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.09s 2026-04-13 00:59:23.234637 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.57s 2026-04-13 00:59:23.234664 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.29s 2026-04-13 00:59:23.234681 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.99s 2026-04-13 00:59:23.234697 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.95s 2026-04-13 00:59:23.234714 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.94s 2026-04-13 00:59:23.234729 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.77s 2026-04-13 00:59:23.234744 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.35s 2026-04-13 00:59:23.234760 | orchestrator | Check MariaDB service --------------------------------------------------- 3.02s 2026-04-13 00:59:23.234774 | orchestrator | 
mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.93s 2026-04-13 00:59:23.234788 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.83s 2026-04-13 00:59:23.234802 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.81s 2026-04-13 00:59:23.234816 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.48s 2026-04-13 00:59:23.234830 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.32s 2026-04-13 00:59:23.234844 | orchestrator | 2026-04-13 00:59:23 | INFO  | Task 20435b94-2758-48f2-8845-e16a11ab904c is in state SUCCESS 2026-04-13 00:59:23.234856 | orchestrator | 2026-04-13 00:59:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:59:23.234864 | orchestrator | 2026-04-13 00:59:23.234872 | orchestrator | 2026-04-13 00:59:23.234880 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 00:59:23.234889 | orchestrator | 2026-04-13 00:59:23.234897 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 00:59:23.234905 | orchestrator | Monday 13 April 2026 00:56:21 +0000 (0:00:00.323) 0:00:00.323 ********** 2026-04-13 00:59:23.234926 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:59:23.234935 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:59:23.234949 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:59:23.234957 | orchestrator | 2026-04-13 00:59:23.234966 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 00:59:23.234974 | orchestrator | Monday 13 April 2026 00:56:22 +0000 (0:00:00.309) 0:00:00.632 ********** 2026-04-13 00:59:23.234982 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-13 00:59:23.234991 | orchestrator | ok: [testbed-node-1] => 
(item=enable_opensearch_True) 2026-04-13 00:59:23.234999 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-13 00:59:23.235007 | orchestrator | 2026-04-13 00:59:23.235015 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-13 00:59:23.235023 | orchestrator | 2026-04-13 00:59:23.235031 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-13 00:59:23.235039 | orchestrator | Monday 13 April 2026 00:56:22 +0000 (0:00:00.308) 0:00:00.941 ********** 2026-04-13 00:59:23.235047 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:59:23.235055 | orchestrator | 2026-04-13 00:59:23.235063 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-04-13 00:59:23.235071 | orchestrator | Monday 13 April 2026 00:56:22 +0000 (0:00:00.604) 0:00:01.546 ********** 2026-04-13 00:59:23.235080 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-13 00:59:23.235088 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-13 00:59:23.235096 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-13 00:59:23.235104 | orchestrator | 2026-04-13 00:59:23.235112 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-13 00:59:23.235120 | orchestrator | Monday 13 April 2026 00:56:25 +0000 (0:00:02.057) 0:00:03.603 ********** 2026-04-13 00:59:23.235130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-13 00:59:23.235152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-13 00:59:23.235162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-13 00:59:23.235182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-13 00:59:23.235193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-13 00:59:23.235210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-13 00:59:23.235220 | orchestrator | 2026-04-13 00:59:23.235228 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-13 00:59:23.235237 | orchestrator | Monday 13 April 2026 00:56:26 +0000 (0:00:01.531) 0:00:05.135 ********** 2026-04-13 00:59:23.235250 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-04-13 00:59:23.235259 | orchestrator | 2026-04-13 00:59:23.235267 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-13 00:59:23.235276 | orchestrator | Monday 13 April 2026 00:56:27 +0000 (0:00:00.499) 0:00:05.635 ********** 2026-04-13 00:59:23.235317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-13 00:59:23.235328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}}) 2026-04-13 00:59:23.235337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-13 00:59:23.235352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-13 
00:59:23.235369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-13 00:59:23.235395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-13 00:59:23.235415 | orchestrator | 2026-04-13 00:59:23.235430 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-13 00:59:23.235444 | orchestrator | Monday 13 April 2026 00:56:30 +0000 (0:00:03.298) 0:00:08.934 ********** 2026-04-13 00:59:23.235458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-13 00:59:23.235481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-13 00:59:23.235498 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:59:23.235507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-13 00:59:23.235521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-13 00:59:23.235530 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.235539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-13 00:59:23.235554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-13 00:59:23.235569 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.235577 | orchestrator | 2026-04-13 00:59:23.235585 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-13 00:59:23.235594 | orchestrator | Monday 13 April 2026 00:56:31 +0000 (0:00:00.695) 0:00:09.629 ********** 2026-04-13 00:59:23.235602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-13 00:59:23.235616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-13 00:59:23.235625 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:59:23.235634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-13 00:59:23.235648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-13 00:59:23.235663 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.235672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-13 00:59:23.235686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-13 00:59:23.235695 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.235703 | orchestrator | 2026-04-13 00:59:23.235711 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-13 00:59:23.235720 | orchestrator | Monday 13 April 2026 00:56:31 +0000 (0:00:00.922) 0:00:10.552 ********** 2026-04-13 00:59:23.235729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-13 00:59:23.235738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-13 00:59:23.235762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-13 00:59:23.235785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-13 00:59:23.235796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-04-13 00:59:23.235808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-13 00:59:23.235827 | orchestrator | 2026-04-13 00:59:23.235836 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-13 00:59:23.235844 | orchestrator | Monday 13 April 2026 00:56:34 +0000 (0:00:02.892) 0:00:13.445 ********** 2026-04-13 00:59:23.235852 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.235861 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:59:23.235869 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:59:23.235877 | orchestrator | 2026-04-13 00:59:23.235890 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-13 00:59:23.235899 | orchestrator | Monday 13 April 2026 00:56:37 +0000 (0:00:02.838) 0:00:16.283 ********** 2026-04-13 00:59:23.235908 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.235916 | orchestrator 
| changed: [testbed-node-1] 2026-04-13 00:59:23.235924 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:59:23.235932 | orchestrator | 2026-04-13 00:59:23.235940 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-04-13 00:59:23.235948 | orchestrator | Monday 13 April 2026 00:56:39 +0000 (0:00:01.609) 0:00:17.893 ********** 2026-04-13 00:59:23.235957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-13 00:59:23.235970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-13 00:59:23.235979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-13 00:59:23.235988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-13 00:59:23.236008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-13 00:59:23.236024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-13 00:59:23.236033 | orchestrator | 2026-04-13 00:59:23.236042 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-13 00:59:23.236050 | orchestrator | Monday 13 April 2026 00:56:41 +0000 (0:00:02.018) 0:00:19.912 ********** 2026-04-13 00:59:23.236059 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:59:23.236067 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:59:23.236075 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:59:23.236083 | orchestrator | 2026-04-13 00:59:23.236092 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-13 00:59:23.236100 | orchestrator | Monday 13 April 2026 00:56:41 +0000 (0:00:00.617) 0:00:20.529 ********** 2026-04-13 00:59:23.236108 | orchestrator | 2026-04-13 00:59:23.236116 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-13 00:59:23.236145 | orchestrator | Monday 13 April 2026 00:56:42 +0000 (0:00:00.071) 0:00:20.600 ********** 2026-04-13 00:59:23.236156 | orchestrator | 2026-04-13 00:59:23.236164 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-13 00:59:23.236173 | orchestrator | Monday 13 April 2026 00:56:42 +0000 (0:00:00.074) 0:00:20.675 ********** 2026-04-13 00:59:23.236181 | orchestrator | 2026-04-13 00:59:23.236189 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-13 00:59:23.236197 | orchestrator | Monday 13 April 2026 00:56:42 +0000 (0:00:00.092) 0:00:20.767 ********** 2026-04-13 00:59:23.236206 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:59:23.236214 | 
orchestrator | 2026-04-13 00:59:23.236222 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-13 00:59:23.236231 | orchestrator | Monday 13 April 2026 00:56:42 +0000 (0:00:00.278) 0:00:21.046 ********** 2026-04-13 00:59:23.236239 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:59:23.236247 | orchestrator | 2026-04-13 00:59:23.236255 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-13 00:59:23.236263 | orchestrator | Monday 13 April 2026 00:56:42 +0000 (0:00:00.240) 0:00:21.287 ********** 2026-04-13 00:59:23.236272 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.236280 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:59:23.236313 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:59:23.236323 | orchestrator | 2026-04-13 00:59:23.236331 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-13 00:59:23.236339 | orchestrator | Monday 13 April 2026 00:57:52 +0000 (0:01:09.924) 0:01:31.212 ********** 2026-04-13 00:59:23.236347 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.236356 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:59:23.236365 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:59:23.236378 | orchestrator | 2026-04-13 00:59:23.236387 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-13 00:59:23.236396 | orchestrator | Monday 13 April 2026 00:59:07 +0000 (0:01:14.423) 0:02:45.635 ********** 2026-04-13 00:59:23.236404 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:59:23.236412 | orchestrator | 2026-04-13 00:59:23.236420 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-13 00:59:23.236429 | orchestrator | Monday 13 April 2026 
00:59:07 +0000 (0:00:00.679) 0:02:46.315 ********** 2026-04-13 00:59:23.236442 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:59:23.236450 | orchestrator | 2026-04-13 00:59:23.236459 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-04-13 00:59:23.236467 | orchestrator | Monday 13 April 2026 00:59:10 +0000 (0:00:02.411) 0:02:48.727 ********** 2026-04-13 00:59:23.236475 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:59:23.236483 | orchestrator | 2026-04-13 00:59:23.236492 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-13 00:59:23.236500 | orchestrator | Monday 13 April 2026 00:59:12 +0000 (0:00:02.144) 0:02:50.872 ********** 2026-04-13 00:59:23.236508 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:59:23.236517 | orchestrator | 2026-04-13 00:59:23.236525 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-13 00:59:23.236533 | orchestrator | Monday 13 April 2026 00:59:14 +0000 (0:00:02.144) 0:02:53.016 ********** 2026-04-13 00:59:23.236541 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.236549 | orchestrator | 2026-04-13 00:59:23.236557 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-13 00:59:23.236566 | orchestrator | Monday 13 April 2026 00:59:17 +0000 (0:00:02.768) 0:02:55.785 ********** 2026-04-13 00:59:23.236574 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:59:23.236582 | orchestrator | 2026-04-13 00:59:23.236590 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:59:23.236599 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-13 00:59:23.236699 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-13 00:59:23.236708 | 
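The "Check if a log retention policy exists" / "Create new log retention policy" pair above talks to the OpenSearch Index State Management (ISM) plugin. A minimal sketch of the kind of policy body such a task might submit; the state names, index pattern, and 14-day retention period here are illustrative assumptions, not values taken from this job:

```python
import json

# Hypothetical ISM policy: transition log indices to a delete state once
# they are older than 14 days. Pattern and age are assumed for illustration.
retention_policy = {
    "policy": {
        "description": "delete old log indices",
        "default_state": "retain",
        "states": [
            {"name": "retain",
             "actions": [],
             "transitions": [{"state_name": "delete",
                              "conditions": {"min_index_age": "14d"}}]},
            {"name": "delete",
             "actions": [{"delete": {}}],
             "transitions": []},
        ],
        "ism_template": [{"index_patterns": ["log-*"], "priority": 1}],
    }
}

# The body would be PUT to the ISM policies endpoint as JSON.
print(json.dumps(retention_policy, indent=2))
```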
orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-13 00:59:23.236717 | orchestrator | 2026-04-13 00:59:23.236725 | orchestrator | 2026-04-13 00:59:23.236733 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:59:23.236742 | orchestrator | Monday 13 April 2026 00:59:19 +0000 (0:00:02.663) 0:02:58.448 ********** 2026-04-13 00:59:23.236754 | orchestrator | =============================================================================== 2026-04-13 00:59:23.236763 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 74.42s 2026-04-13 00:59:23.236771 | orchestrator | opensearch : Restart opensearch container ------------------------------ 69.92s 2026-04-13 00:59:23.236779 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.30s 2026-04-13 00:59:23.236788 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.89s 2026-04-13 00:59:23.236796 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.84s 2026-04-13 00:59:23.236804 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.77s 2026-04-13 00:59:23.236812 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.66s 2026-04-13 00:59:23.236820 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.41s 2026-04-13 00:59:23.236828 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.14s 2026-04-13 00:59:23.236836 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.14s 2026-04-13 00:59:23.236845 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.06s 2026-04-13 00:59:23.236853 | orchestrator | opensearch : Check opensearch 
containers -------------------------------- 2.02s 2026-04-13 00:59:23.236861 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.61s 2026-04-13 00:59:23.236869 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.53s 2026-04-13 00:59:23.236877 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.92s 2026-04-13 00:59:23.236885 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.70s 2026-04-13 00:59:23.236894 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s 2026-04-13 00:59:23.236902 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.62s 2026-04-13 00:59:23.236910 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.60s 2026-04-13 00:59:23.236918 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2026-04-13 00:59:26.281387 | orchestrator | 2026-04-13 00:59:26 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 00:59:26.281489 | orchestrator | 2026-04-13 00:59:26 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:59:26.283869 | orchestrator | 2026-04-13 00:59:26 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED 2026-04-13 00:59:26.283935 | orchestrator | 2026-04-13 00:59:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:59:29.326265 | orchestrator | 2026-04-13 00:59:29 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 00:59:29.328447 | orchestrator | 2026-04-13 00:59:29 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:59:29.330065 | orchestrator | 2026-04-13 00:59:29 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED 2026-04-13 
00:59:29.330108 | orchestrator | 2026-04-13 00:59:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:59:32.389669 | orchestrator | 2026-04-13 00:59:32 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 00:59:32.390468 | orchestrator | 2026-04-13 00:59:32 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:59:32.391549 | orchestrator | 2026-04-13 00:59:32 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED 2026-04-13 00:59:32.391589 | orchestrator | 2026-04-13 00:59:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:59:35.441813 | orchestrator | 2026-04-13 00:59:35 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 00:59:35.442670 | orchestrator | 2026-04-13 00:59:35 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:59:35.443469 | orchestrator | 2026-04-13 00:59:35 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED 2026-04-13 00:59:35.443508 | orchestrator | 2026-04-13 00:59:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:59:38.489331 | orchestrator | 2026-04-13 00:59:38 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 00:59:38.491904 | orchestrator | 2026-04-13 00:59:38 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:59:38.493801 | orchestrator | 2026-04-13 00:59:38 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED 2026-04-13 00:59:38.494129 | orchestrator | 2026-04-13 00:59:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:59:41.530707 | orchestrator | 2026-04-13 00:59:41 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 00:59:41.532538 | orchestrator | 2026-04-13 00:59:41 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:59:41.534235 | orchestrator | 2026-04-13 00:59:41 | 
INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED 2026-04-13 00:59:41.534354 | orchestrator | 2026-04-13 00:59:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:59:44.570497 | orchestrator | 2026-04-13 00:59:44 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 00:59:44.570598 | orchestrator | 2026-04-13 00:59:44 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:59:44.571701 | orchestrator | 2026-04-13 00:59:44 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED 2026-04-13 00:59:44.571753 | orchestrator | 2026-04-13 00:59:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:59:47.610208 | orchestrator | 2026-04-13 00:59:47 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 00:59:47.611062 | orchestrator | 2026-04-13 00:59:47 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:59:47.613098 | orchestrator | 2026-04-13 00:59:47 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED 2026-04-13 00:59:47.613136 | orchestrator | 2026-04-13 00:59:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:59:50.646693 | orchestrator | 2026-04-13 00:59:50 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 00:59:50.647062 | orchestrator | 2026-04-13 00:59:50 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state STARTED 2026-04-13 00:59:50.648246 | orchestrator | 2026-04-13 00:59:50 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED 2026-04-13 00:59:50.648340 | orchestrator | 2026-04-13 00:59:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:59:53.678211 | orchestrator | 2026-04-13 00:59:53 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 00:59:53.678405 | orchestrator | 2026-04-13 00:59:53 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in 
state STARTED 2026-04-13 00:59:53.679043 | orchestrator | 2026-04-13 00:59:53 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED 2026-04-13 00:59:53.679076 | orchestrator | 2026-04-13 00:59:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:59:56.723619 | orchestrator | 2026-04-13 00:59:56 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 00:59:56.731497 | orchestrator | 2026-04-13 00:59:56.731556 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-13 00:59:56.731563 | orchestrator | 2.16.14 2026-04-13 00:59:56.731568 | orchestrator | 2026-04-13 00:59:56.731573 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-04-13 00:59:56.731578 | orchestrator | 2026-04-13 00:59:56.731582 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-13 00:59:56.731587 | orchestrator | Monday 13 April 2026 00:57:59 +0000 (0:00:00.612) 0:00:00.612 ********** 2026-04-13 00:59:56.731591 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:59:56.731596 | orchestrator | 2026-04-13 00:59:56.731600 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-13 00:59:56.731604 | orchestrator | Monday 13 April 2026 00:57:59 +0000 (0:00:00.688) 0:00:01.301 ********** 2026-04-13 00:59:56.731608 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:59:56.731612 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:59:56.731616 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:59:56.731620 | orchestrator | 2026-04-13 00:59:56.731624 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-13 00:59:56.731628 | orchestrator | Monday 13 April 2026 00:58:00 +0000 (0:00:01.066) 0:00:02.367 ********** 2026-04-13 
00:59:56.731632 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:59:56.731636 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:59:56.731640 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:59:56.731644 | orchestrator | 2026-04-13 00:59:56.731648 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-13 00:59:56.731652 | orchestrator | Monday 13 April 2026 00:58:01 +0000 (0:00:00.399) 0:00:02.767 ********** 2026-04-13 00:59:56.731656 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:59:56.731660 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:59:56.731664 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:59:56.731668 | orchestrator | 2026-04-13 00:59:56.731672 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-13 00:59:56.731676 | orchestrator | Monday 13 April 2026 00:58:02 +0000 (0:00:00.830) 0:00:03.597 ********** 2026-04-13 00:59:56.731680 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:59:56.731684 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:59:56.731688 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:59:56.731692 | orchestrator | 2026-04-13 00:59:56.731696 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-13 00:59:56.731700 | orchestrator | Monday 13 April 2026 00:58:02 +0000 (0:00:00.333) 0:00:03.931 ********** 2026-04-13 00:59:56.731704 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:59:56.731708 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:59:56.731712 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:59:56.731716 | orchestrator | 2026-04-13 00:59:56.731720 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-13 00:59:56.731735 | orchestrator | Monday 13 April 2026 00:58:02 +0000 (0:00:00.297) 0:00:04.229 ********** 2026-04-13 00:59:56.731740 | orchestrator | ok: [testbed-node-3] 2026-04-13 
00:59:56.731744 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:59:56.731748 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:59:56.731752 | orchestrator | 2026-04-13 00:59:56.731756 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-13 00:59:56.731774 | orchestrator | Monday 13 April 2026 00:58:03 +0000 (0:00:00.330) 0:00:04.559 ********** 2026-04-13 00:59:56.731779 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.731783 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.731787 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.731792 | orchestrator | 2026-04-13 00:59:56.731796 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-13 00:59:56.731800 | orchestrator | Monday 13 April 2026 00:58:03 +0000 (0:00:00.528) 0:00:05.088 ********** 2026-04-13 00:59:56.731804 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:59:56.731808 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:59:56.731812 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:59:56.731816 | orchestrator | 2026-04-13 00:59:56.731820 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-13 00:59:56.731824 | orchestrator | Monday 13 April 2026 00:58:03 +0000 (0:00:00.302) 0:00:05.390 ********** 2026-04-13 00:59:56.731828 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-13 00:59:56.731832 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-13 00:59:56.731836 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-13 00:59:56.731840 | orchestrator | 2026-04-13 00:59:56.731844 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-13 00:59:56.731848 | orchestrator | Monday 13 April 2026 
00:58:04 +0000 (0:00:00.694) 0:00:06.085 ********** 2026-04-13 00:59:56.731852 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:59:56.731856 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:59:56.731860 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:59:56.731864 | orchestrator | 2026-04-13 00:59:56.731868 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-13 00:59:56.731923 | orchestrator | Monday 13 April 2026 00:58:04 +0000 (0:00:00.420) 0:00:06.506 ********** 2026-04-13 00:59:56.731928 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-13 00:59:56.731932 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-13 00:59:56.731936 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-13 00:59:56.731940 | orchestrator | 2026-04-13 00:59:56.731944 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-13 00:59:56.731948 | orchestrator | Monday 13 April 2026 00:58:08 +0000 (0:00:03.114) 0:00:09.621 ********** 2026-04-13 00:59:56.731952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-13 00:59:56.731957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-13 00:59:56.731961 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-13 00:59:56.731965 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.731969 | orchestrator | 2026-04-13 00:59:56.731983 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-13 00:59:56.731988 | orchestrator | Monday 13 April 2026 00:58:08 +0000 (0:00:00.429) 0:00:10.050 ********** 2026-04-13 00:59:56.731993 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.731999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.732003 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.732012 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.732016 | orchestrator | 2026-04-13 00:59:56.732020 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-13 00:59:56.732024 | orchestrator | Monday 13 April 2026 00:58:09 +0000 (0:00:00.812) 0:00:10.863 ********** 2026-04-13 00:59:56.732030 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.732039 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.732044 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.732048 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.732052 | orchestrator | 2026-04-13 00:59:56.732056 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-13 00:59:56.732060 | orchestrator | Monday 13 April 2026 00:58:09 +0000 (0:00:00.173) 0:00:11.036 ********** 2026-04-13 00:59:56.732066 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ffc1ff2912d1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-13 00:58:05.921759', 'end': '2026-04-13 00:58:05.969559', 'delta': '0:00:00.047800', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ffc1ff2912d1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-13 00:59:56.732074 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '683b1a67c00a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-13 00:58:07.019537', 'end': '2026-04-13 00:58:07.063871', 'delta': '0:00:00.044334', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': 
False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['683b1a67c00a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-13 00:59:56.732082 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b4520027b10e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-13 00:58:07.864773', 'end': '2026-04-13 00:58:07.920729', 'delta': '0:00:00.055956', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4520027b10e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-13 00:59:56.732090 | orchestrator | 2026-04-13 00:59:56.732095 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-13 00:59:56.732099 | orchestrator | Monday 13 April 2026 00:58:09 +0000 (0:00:00.426) 0:00:11.463 ********** 2026-04-13 00:59:56.732104 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:59:56.732108 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:59:56.732112 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:59:56.732117 | orchestrator | 2026-04-13 00:59:56.732121 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-13 00:59:56.732126 | orchestrator | Monday 13 April 2026 00:58:10 +0000 (0:00:00.429) 0:00:11.893 ********** 2026-04-13 00:59:56.732130 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] 2026-04-13 00:59:56.732135 | orchestrator | 2026-04-13 00:59:56.732139 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-13 00:59:56.732144 | orchestrator | Monday 13 April 2026 00:58:11 +0000 (0:00:01.327) 0:00:13.221 ********** 2026-04-13 00:59:56.732148 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.732153 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.732157 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.732162 | orchestrator | 2026-04-13 00:59:56.732166 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-13 00:59:56.732170 | orchestrator | Monday 13 April 2026 00:58:11 +0000 (0:00:00.321) 0:00:13.542 ********** 2026-04-13 00:59:56.732175 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.732179 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.732184 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.732188 | orchestrator | 2026-04-13 00:59:56.732193 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-13 00:59:56.732200 | orchestrator | Monday 13 April 2026 00:58:12 +0000 (0:00:00.506) 0:00:14.048 ********** 2026-04-13 00:59:56.732205 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.732209 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.732214 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.732218 | orchestrator | 2026-04-13 00:59:56.732222 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-13 00:59:56.732227 | orchestrator | Monday 13 April 2026 00:58:13 +0000 (0:00:00.575) 0:00:14.624 ********** 2026-04-13 00:59:56.732231 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:59:56.732236 | orchestrator | 2026-04-13 00:59:56.732240 | orchestrator | TASK [ceph-facts : Generate 
cluster fsid] ************************************** 2026-04-13 00:59:56.732245 | orchestrator | Monday 13 April 2026 00:58:13 +0000 (0:00:00.158) 0:00:14.782 ********** 2026-04-13 00:59:56.732249 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.732253 | orchestrator | 2026-04-13 00:59:56.732258 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-13 00:59:56.732308 | orchestrator | Monday 13 April 2026 00:58:13 +0000 (0:00:00.225) 0:00:15.008 ********** 2026-04-13 00:59:56.732314 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.732319 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.732323 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.732327 | orchestrator | 2026-04-13 00:59:56.732332 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-13 00:59:56.732336 | orchestrator | Monday 13 April 2026 00:58:13 +0000 (0:00:00.310) 0:00:15.318 ********** 2026-04-13 00:59:56.732341 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.732345 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.732349 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.732354 | orchestrator | 2026-04-13 00:59:56.732358 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-13 00:59:56.732693 | orchestrator | Monday 13 April 2026 00:58:14 +0000 (0:00:00.338) 0:00:15.656 ********** 2026-04-13 00:59:56.732708 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.732714 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.732720 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.732726 | orchestrator | 2026-04-13 00:59:56.732731 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-13 00:59:56.732737 | orchestrator | Monday 13 April 2026 00:58:14 +0000 
(0:00:00.535) 0:00:16.192 ********** 2026-04-13 00:59:56.732743 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.732748 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.732753 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.732759 | orchestrator | 2026-04-13 00:59:56.732766 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-13 00:59:56.732787 | orchestrator | Monday 13 April 2026 00:58:14 +0000 (0:00:00.314) 0:00:16.506 ********** 2026-04-13 00:59:56.732794 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.732800 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.732806 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.732812 | orchestrator | 2026-04-13 00:59:56.732818 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-13 00:59:56.732826 | orchestrator | Monday 13 April 2026 00:58:15 +0000 (0:00:00.325) 0:00:16.832 ********** 2026-04-13 00:59:56.732830 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.732834 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.732838 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.732867 | orchestrator | 2026-04-13 00:59:56.732871 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-13 00:59:56.732876 | orchestrator | Monday 13 April 2026 00:58:15 +0000 (0:00:00.327) 0:00:17.160 ********** 2026-04-13 00:59:56.732880 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.732884 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.732888 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.732920 | orchestrator | 2026-04-13 00:59:56.732926 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-13 00:59:56.732932 | orchestrator | Monday 13 April 2026 00:58:16 +0000 
(0:00:00.506) 0:00:17.667 ********** 2026-04-13 00:59:56.732940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--273f60d0--eab1--5837--bb33--0c04c9e5b829-osd--block--273f60d0--eab1--5837--bb33--0c04c9e5b829', 'dm-uuid-LVM-Mr1Q93NeSsnqlaYzlizzQ82P3R69N73YnF4wV9m7xmyazb6rsYJT7xb0zocD08yt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.732950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f99b2314--ad51--5797--a71e--17207c9800e6-osd--block--f99b2314--ad51--5797--a71e--17207c9800e6', 'dm-uuid-LVM-Zv4PurkWYeoDs9KB6u8YAxs5qYmjOzJ7edNlLzVRvDP617MCxld659gQGqVso69K'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.732963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.732979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.732998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733058 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part1', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part14', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part15', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part16', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:59:56.733074 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--976187fe--8802--504d--92cd--339995e22605-osd--block--976187fe--8802--504d--92cd--339995e22605', 'dm-uuid-LVM-tRfeWyEsbCcRzYaI0KmmkGukknCbNfxEirUZgI6deh8waBk2mMICIOw8e11sjiBA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--273f60d0--eab1--5837--bb33--0c04c9e5b829-osd--block--273f60d0--eab1--5837--bb33--0c04c9e5b829'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vjQyqS-0O2i-oUfi-QrIp-EEvb-mkza-Ay8B4d', 'scsi-0QEMU_QEMU_HARDDISK_0679126a-4000-4d61-a7db-c334b9d13f77', 'scsi-SQEMU_QEMU_HARDDISK_0679126a-4000-4d61-a7db-c334b9d13f77'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:59:56.733097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--204a2e69--8032--57e4--80e8--bdb37f98e657-osd--block--204a2e69--8032--57e4--80e8--bdb37f98e657', 'dm-uuid-LVM-BgltwyKEc1hQK7TJ3EvhVOEE61h7GR8jqzNvFt9z9mBySS0of86UAOJIH8eRQC1B'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f99b2314--ad51--5797--a71e--17207c9800e6-osd--block--f99b2314--ad51--5797--a71e--17207c9800e6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8A021o-0SEM-qE3F-L4Wz-tepU-5Ebc-2TkWkY', 'scsi-0QEMU_QEMU_HARDDISK_9561ecc7-53f2-4f93-a506-8a94937d6a2f', 'scsi-SQEMU_QEMU_HARDDISK_9561ecc7-53f2-4f93-a506-8a94937d6a2f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:59:56.733111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36e0079f-b8cc-463e-a3d4-692b22821d05', 'scsi-SQEMU_QEMU_HARDDISK_36e0079f-b8cc-463e-a3d4-692b22821d05'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:59:56.733121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:59:56.733143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
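The "Set_fact running_mon - container" and "Set_fact _container_exec_cmd" tasks above probe each monitor host with `docker ps -q --filter name=ceph-mon-<host>` and then build an exec prefix for the first monitor that returned a container ID. A minimal Python sketch of that selection logic, assuming it approximates what the ceph-facts role does (the helper names are hypothetical, the probe results are taken from the log):

```python
# Sketch of the running-mon detection seen in the log: for each monitor host,
# `docker ps -q --filter name=ceph-mon-<host>` was run; a non-empty stdout
# means that monitor's container is up.

def pick_running_mon(ps_results):
    """Return the first host whose docker-ps probe printed a container ID."""
    for host, stdout in ps_results:
        if stdout.strip():
            return host
    return None

def container_exec_cmd(mon_host, engine="docker"):
    """Build the exec prefix used for subsequent ceph commands,
    e.g. 'docker exec ceph-mon-testbed-node-0'."""
    return f"{engine} exec ceph-mon-{mon_host}"

# Probe results exactly as they appear in the log output above.
results = [
    ("testbed-node-0", "ffc1ff2912d1\n"),
    ("testbed-node-1", "683b1a67c00a\n"),
    ("testbed-node-2", "b4520027b10e\n"),
]
print(container_exec_cmd(pick_running_mon(results)))
# -> docker exec ceph-mon-testbed-node-0
```

This also explains the later "Get current fsid" task being delegated to a monitor node (`testbed-node-3 -> testbed-node-2(192.168.16.12)`): the fsid is read by exec-ing into a running mon container rather than on the OSD host itself.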
 2026-04-13 00:59:56.733148 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.733152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae95053f--cfae--50f3--8301--23c2132e6da4-osd--block--ae95053f--cfae--50f3--8301--23c2132e6da4', 'dm-uuid-LVM-wGY9KIRhm7IaVKgPekBld64Nsr4cXFHYTnMbF7axTSTNUFWMfy3NmO8CcXI9BhjY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:59:56.733201 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--42f39a41--1a89--55d6--ba76--16e64e7a2b2d-osd--block--42f39a41--1a89--55d6--ba76--16e64e7a2b2d', 'dm-uuid-LVM-GnYlbSDmKKf8kqe05EYZgvzXvfiTNv27Pd4xX5u2Umcq5s1KRyrmBZw287rcJfR2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--976187fe--8802--504d--92cd--339995e22605-osd--block--976187fe--8802--504d--92cd--339995e22605'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wPL5U6-PwRf-m1u5-PNtC-WxG6-QRHR-4sCXGb', 'scsi-0QEMU_QEMU_HARDDISK_64ba95e0-52ec-4080-a400-33c71893d605', 'scsi-SQEMU_QEMU_HARDDISK_64ba95e0-52ec-4080-a400-33c71893d605'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:59:56.733216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--204a2e69--8032--57e4--80e8--bdb37f98e657-osd--block--204a2e69--8032--57e4--80e8--bdb37f98e657'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XtfceA-Mbkv-edmG-nfsU-T6X9-jaeN-0URiWL', 'scsi-0QEMU_QEMU_HARDDISK_8eda79f4-f653-48ca-bc7b-44aba519c194', 'scsi-SQEMU_QEMU_HARDDISK_8eda79f4-f653-48ca-bc7b-44aba519c194'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:59:56.733220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9aa3d683-c16f-4a6c-9923-af2b5f9d7d5e', 'scsi-SQEMU_QEMU_HARDDISK_9aa3d683-c16f-4a6c-9923-af2b5f9d7d5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:59:56 | INFO  | Task e5ba770f-23c3-46dc-8ca8-4cd256f64579 is in state SUCCESS 2026-04-13 00:59:56.733236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:59:56.733240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733247 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.733251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:59:56.733303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': 
{'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:59:56.733314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ae95053f--cfae--50f3--8301--23c2132e6da4-osd--block--ae95053f--cfae--50f3--8301--23c2132e6da4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IkZIFC-QO06-F9OK-5MzU-4gck-L7wj-os076W', 'scsi-0QEMU_QEMU_HARDDISK_2beae69f-4f2c-4ffb-b1cc-4fe56058469a', 'scsi-SQEMU_QEMU_HARDDISK_2beae69f-4f2c-4ffb-b1cc-4fe56058469a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:59:56.733319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--42f39a41--1a89--55d6--ba76--16e64e7a2b2d-osd--block--42f39a41--1a89--55d6--ba76--16e64e7a2b2d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g1Hjx0-VEhr-pSSU-0d55-M01s-wOvL-5jZgev', 'scsi-0QEMU_QEMU_HARDDISK_7036bc7f-1d9f-4bbc-89ec-79faed4557a7', 'scsi-SQEMU_QEMU_HARDDISK_7036bc7f-1d9f-4bbc-89ec-79faed4557a7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:59:56.733323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_210099df-3e7f-48c2-8d6b-572e8a7c1923', 'scsi-SQEMU_QEMU_HARDDISK_210099df-3e7f-48c2-8d6b-572e8a7c1923'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:59:56.733333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:59:56.733337 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.733341 | orchestrator | 2026-04-13 00:59:56.733345 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-04-13 00:59:56.733349 | orchestrator | Monday 13 April 2026 00:58:16 +0000 (0:00:00.583) 0:00:18.250 ********** 2026-04-13 00:59:56.733354 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--273f60d0--eab1--5837--bb33--0c04c9e5b829-osd--block--273f60d0--eab1--5837--bb33--0c04c9e5b829', 'dm-uuid-LVM-Mr1Q93NeSsnqlaYzlizzQ82P3R69N73YnF4wV9m7xmyazb6rsYJT7xb0zocD08yt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733364 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f99b2314--ad51--5797--a71e--17207c9800e6-osd--block--f99b2314--ad51--5797--a71e--17207c9800e6', 'dm-uuid-LVM-Zv4PurkWYeoDs9KB6u8YAxs5qYmjOzJ7edNlLzVRvDP617MCxld659gQGqVso69K'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733370 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733377 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733384 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733394 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733405 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733411 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733421 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733427 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733432 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--976187fe--8802--504d--92cd--339995e22605-osd--block--976187fe--8802--504d--92cd--339995e22605', 'dm-uuid-LVM-tRfeWyEsbCcRzYaI0KmmkGukknCbNfxEirUZgI6deh8waBk2mMICIOw8e11sjiBA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part1', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part14', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part15', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part16', 'scsi-SQEMU_QEMU_HARDDISK_194ebb96-0dba-4c24-aa8b-2b193008c6b3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733452 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--204a2e69--8032--57e4--80e8--bdb37f98e657-osd--block--204a2e69--8032--57e4--80e8--bdb37f98e657', 'dm-uuid-LVM-BgltwyKEc1hQK7TJ3EvhVOEE61h7GR8jqzNvFt9z9mBySS0of86UAOJIH8eRQC1B'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733457 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--273f60d0--eab1--5837--bb33--0c04c9e5b829-osd--block--273f60d0--eab1--5837--bb33--0c04c9e5b829'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vjQyqS-0O2i-oUfi-QrIp-EEvb-mkza-Ay8B4d', 'scsi-0QEMU_QEMU_HARDDISK_0679126a-4000-4d61-a7db-c334b9d13f77', 'scsi-SQEMU_QEMU_HARDDISK_0679126a-4000-4d61-a7db-c334b9d13f77'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733464 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f99b2314--ad51--5797--a71e--17207c9800e6-osd--block--f99b2314--ad51--5797--a71e--17207c9800e6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8A021o-0SEM-qE3F-L4Wz-tepU-5Ebc-2TkWkY', 'scsi-0QEMU_QEMU_HARDDISK_9561ecc7-53f2-4f93-a506-8a94937d6a2f', 'scsi-SQEMU_QEMU_HARDDISK_9561ecc7-53f2-4f93-a506-8a94937d6a2f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733472 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733478 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36e0079f-b8cc-463e-a3d4-692b22821d05', 'scsi-SQEMU_QEMU_HARDDISK_36e0079f-b8cc-463e-a3d4-692b22821d05'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733483 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733487 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733491 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.733495 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733503 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733510 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733514 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733521 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733525 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733533 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a4aafab-1458-40b8-8f4e-2a9f2d8e58a7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733543 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae95053f--cfae--50f3--8301--23c2132e6da4-osd--block--ae95053f--cfae--50f3--8301--23c2132e6da4', 'dm-uuid-LVM-wGY9KIRhm7IaVKgPekBld64Nsr4cXFHYTnMbF7axTSTNUFWMfy3NmO8CcXI9BhjY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733548 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--42f39a41--1a89--55d6--ba76--16e64e7a2b2d-osd--block--42f39a41--1a89--55d6--ba76--16e64e7a2b2d', 
'dm-uuid-LVM-GnYlbSDmKKf8kqe05EYZgvzXvfiTNv27Pd4xX5u2Umcq5s1KRyrmBZw287rcJfR2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733552 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--976187fe--8802--504d--92cd--339995e22605-osd--block--976187fe--8802--504d--92cd--339995e22605'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wPL5U6-PwRf-m1u5-PNtC-WxG6-QRHR-4sCXGb', 'scsi-0QEMU_QEMU_HARDDISK_64ba95e0-52ec-4080-a400-33c71893d605', 'scsi-SQEMU_QEMU_HARDDISK_64ba95e0-52ec-4080-a400-33c71893d605'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733559 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733567 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--204a2e69--8032--57e4--80e8--bdb37f98e657-osd--block--204a2e69--8032--57e4--80e8--bdb37f98e657'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XtfceA-Mbkv-edmG-nfsU-T6X9-jaeN-0URiWL', 'scsi-0QEMU_QEMU_HARDDISK_8eda79f4-f653-48ca-bc7b-44aba519c194', 'scsi-SQEMU_QEMU_HARDDISK_8eda79f4-f653-48ca-bc7b-44aba519c194'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733572 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733578 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage 
controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9aa3d683-c16f-4a6c-9923-af2b5f9d7d5e', 'scsi-SQEMU_QEMU_HARDDISK_9aa3d683-c16f-4a6c-9923-af2b5f9d7d5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733582 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733586 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733594 | 
orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.733601 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733605 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733609 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733616 | 
orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733620 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733627 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4039d428-e5d1-48e6-9940-0f36e423ec3a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-13 00:59:56.733637 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ae95053f--cfae--50f3--8301--23c2132e6da4-osd--block--ae95053f--cfae--50f3--8301--23c2132e6da4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IkZIFC-QO06-F9OK-5MzU-4gck-L7wj-os076W', 'scsi-0QEMU_QEMU_HARDDISK_2beae69f-4f2c-4ffb-b1cc-4fe56058469a', 'scsi-SQEMU_QEMU_HARDDISK_2beae69f-4f2c-4ffb-b1cc-4fe56058469a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733642 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--42f39a41--1a89--55d6--ba76--16e64e7a2b2d-osd--block--42f39a41--1a89--55d6--ba76--16e64e7a2b2d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g1Hjx0-VEhr-pSSU-0d55-M01s-wOvL-5jZgev', 'scsi-0QEMU_QEMU_HARDDISK_7036bc7f-1d9f-4bbc-89ec-79faed4557a7', 'scsi-SQEMU_QEMU_HARDDISK_7036bc7f-1d9f-4bbc-89ec-79faed4557a7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733646 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_210099df-3e7f-48c2-8d6b-572e8a7c1923', 'scsi-SQEMU_QEMU_HARDDISK_210099df-3e7f-48c2-8d6b-572e8a7c1923'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733656 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:59:56.733661 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.733665 | orchestrator | 2026-04-13 00:59:56.733669 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-13 00:59:56.733673 | orchestrator | Monday 13 April 2026 00:58:17 +0000 (0:00:00.616) 0:00:18.867 ********** 2026-04-13 00:59:56.733677 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:59:56.733681 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:59:56.733685 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:59:56.733689 | orchestrator | 2026-04-13 00:59:56.733693 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-13 00:59:56.733697 | orchestrator | Monday 13 April 2026 00:58:18 +0000 (0:00:00.690) 0:00:19.557 ********** 2026-04-13 00:59:56.733701 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:59:56.733705 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:59:56.733708 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:59:56.733712 | orchestrator | 2026-04-13 00:59:56.733716 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-13 00:59:56.733720 | orchestrator | Monday 13 April 2026 00:58:18 +0000 (0:00:00.510) 0:00:20.067 ********** 2026-04-13 00:59:56.733724 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:59:56.733728 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:59:56.733732 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:59:56.733736 | orchestrator | 2026-04-13 00:59:56.733740 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-13 00:59:56.733744 | orchestrator | Monday 13 April 2026 00:58:20 +0000 (0:00:01.686) 0:00:21.754 
********** 2026-04-13 00:59:56.733747 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.733751 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.733755 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.733759 | orchestrator | 2026-04-13 00:59:56.733763 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-13 00:59:56.733767 | orchestrator | Monday 13 April 2026 00:58:20 +0000 (0:00:00.285) 0:00:22.040 ********** 2026-04-13 00:59:56.733771 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.733775 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.733782 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.733786 | orchestrator | 2026-04-13 00:59:56.733790 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-13 00:59:56.733794 | orchestrator | Monday 13 April 2026 00:58:20 +0000 (0:00:00.406) 0:00:22.447 ********** 2026-04-13 00:59:56.733798 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.733802 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.733806 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.733810 | orchestrator | 2026-04-13 00:59:56.733814 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-13 00:59:56.733827 | orchestrator | Monday 13 April 2026 00:58:21 +0000 (0:00:00.551) 0:00:22.998 ********** 2026-04-13 00:59:56.733835 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-13 00:59:56.733840 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-13 00:59:56.733846 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-13 00:59:56.733859 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-13 00:59:56.733865 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-13 00:59:56.733871 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-13 00:59:56.733877 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-13 00:59:56.733882 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-13 00:59:56.733888 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-13 00:59:56.733894 | orchestrator | 2026-04-13 00:59:56.733900 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-13 00:59:56.733906 | orchestrator | Monday 13 April 2026 00:58:22 +0000 (0:00:00.866) 0:00:23.865 ********** 2026-04-13 00:59:56.733913 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-13 00:59:56.733919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-13 00:59:56.733926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-13 00:59:56.733933 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.733937 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-13 00:59:56.733941 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-13 00:59:56.733945 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-13 00:59:56.733949 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.733953 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-13 00:59:56.733957 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-13 00:59:56.733960 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-13 00:59:56.733964 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.733968 | orchestrator | 2026-04-13 00:59:56.733972 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-13 00:59:56.733976 | orchestrator | Monday 13 April 2026 00:58:22 +0000 (0:00:00.366) 0:00:24.232 ********** 2026-04-13 
00:59:56.733980 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:59:56.733984 | orchestrator | 2026-04-13 00:59:56.733992 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-13 00:59:56.733997 | orchestrator | Monday 13 April 2026 00:58:23 +0000 (0:00:00.697) 0:00:24.929 ********** 2026-04-13 00:59:56.734001 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.734005 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.734009 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.734013 | orchestrator | 2026-04-13 00:59:56.734047 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-13 00:59:56.734052 | orchestrator | Monday 13 April 2026 00:58:23 +0000 (0:00:00.344) 0:00:25.273 ********** 2026-04-13 00:59:56.734055 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.734059 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.734063 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.734067 | orchestrator | 2026-04-13 00:59:56.734071 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-13 00:59:56.734075 | orchestrator | Monday 13 April 2026 00:58:24 +0000 (0:00:00.317) 0:00:25.591 ********** 2026-04-13 00:59:56.734079 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.734083 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.734087 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:59:56.734091 | orchestrator | 2026-04-13 00:59:56.734094 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-13 00:59:56.734103 | orchestrator | Monday 13 April 2026 00:58:24 +0000 (0:00:00.332) 0:00:25.924 ********** 2026-04-13 
00:59:56.734107 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:59:56.734111 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:59:56.734115 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:59:56.734119 | orchestrator | 2026-04-13 00:59:56.734123 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-13 00:59:56.734126 | orchestrator | Monday 13 April 2026 00:58:25 +0000 (0:00:00.660) 0:00:26.585 ********** 2026-04-13 00:59:56.734130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-13 00:59:56.734134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-13 00:59:56.734138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-13 00:59:56.734142 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.734146 | orchestrator | 2026-04-13 00:59:56.734150 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-13 00:59:56.734154 | orchestrator | Monday 13 April 2026 00:58:25 +0000 (0:00:00.391) 0:00:26.977 ********** 2026-04-13 00:59:56.734158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-13 00:59:56.734161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-13 00:59:56.734165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-13 00:59:56.734169 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.734173 | orchestrator | 2026-04-13 00:59:56.734180 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-13 00:59:56.734184 | orchestrator | Monday 13 April 2026 00:58:25 +0000 (0:00:00.359) 0:00:27.337 ********** 2026-04-13 00:59:56.734188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-13 00:59:56.734192 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-13 00:59:56.734196 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-13 00:59:56.734200 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.734204 | orchestrator | 2026-04-13 00:59:56.734208 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-13 00:59:56.734212 | orchestrator | Monday 13 April 2026 00:58:26 +0000 (0:00:00.394) 0:00:27.731 ********** 2026-04-13 00:59:56.734215 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:59:56.734219 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:59:56.734223 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:59:56.734227 | orchestrator | 2026-04-13 00:59:56.734231 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-13 00:59:56.734235 | orchestrator | Monday 13 April 2026 00:58:26 +0000 (0:00:00.322) 0:00:28.054 ********** 2026-04-13 00:59:56.734239 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-13 00:59:56.734243 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-13 00:59:56.734247 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-13 00:59:56.734251 | orchestrator | 2026-04-13 00:59:56.734255 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-13 00:59:56.734259 | orchestrator | Monday 13 April 2026 00:58:27 +0000 (0:00:00.520) 0:00:28.574 ********** 2026-04-13 00:59:56.734276 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-13 00:59:56.734281 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-13 00:59:56.734285 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-13 00:59:56.734289 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-13 00:59:56.734293 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-13 00:59:56.734296 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-13 00:59:56.734300 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-13 00:59:56.734304 | orchestrator | 2026-04-13 00:59:56.734308 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-13 00:59:56.734315 | orchestrator | Monday 13 April 2026 00:58:28 +0000 (0:00:01.025) 0:00:29.600 ********** 2026-04-13 00:59:56.734319 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-13 00:59:56.734323 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-13 00:59:56.734327 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-13 00:59:56.734330 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-13 00:59:56.734338 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-13 00:59:56.734342 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-13 00:59:56.734346 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-13 00:59:56.734350 | orchestrator | 2026-04-13 00:59:56.734354 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-13 00:59:56.734358 | orchestrator | Monday 13 April 2026 00:58:30 +0000 (0:00:02.012) 0:00:31.612 ********** 2026-04-13 00:59:56.734362 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:59:56.734365 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:59:56.734372 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-13 00:59:56.734378 | orchestrator | 2026-04-13 00:59:56.734384 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-13 00:59:56.734390 | orchestrator | Monday 13 April 2026 00:58:30 +0000 (0:00:00.389) 0:00:32.002 ********** 2026-04-13 00:59:56.734396 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-13 00:59:56.734405 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-13 00:59:56.734411 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-13 00:59:56.734418 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-13 00:59:56.734428 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-13 00:59:56.734436 | orchestrator | 2026-04-13 00:59:56.734440 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-04-13 00:59:56.734444 | orchestrator | Monday 13 April 2026 00:59:09 +0000 (0:00:39.173) 0:01:11.175 ********** 2026-04-13 00:59:56.734448 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734451 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734455 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734459 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734463 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734472 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734476 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-13 00:59:56.734480 | orchestrator | 2026-04-13 00:59:56.734484 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-13 00:59:56.734488 | orchestrator | Monday 13 April 2026 00:59:28 +0000 (0:00:19.210) 0:01:30.385 ********** 2026-04-13 00:59:56.734491 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734495 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734499 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734503 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734507 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734511 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734515 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-13 00:59:56.734519 | orchestrator | 2026-04-13 00:59:56.734523 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-13 00:59:56.734527 | orchestrator | Monday 13 April 2026 00:59:38 +0000 (0:00:09.466) 0:01:39.852 ********** 2026-04-13 00:59:56.734530 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734534 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-13 00:59:56.734538 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-13 00:59:56.734545 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734550 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-13 00:59:56.734554 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-13 00:59:56.734558 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734562 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-13 00:59:56.734565 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-13 00:59:56.734569 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734573 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-13 00:59:56.734577 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-13 00:59:56.734581 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734585 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-04-13 00:59:56.734589 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-13 00:59:56.734593 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:59:56.734597 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-13 00:59:56.734601 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-13 00:59:56.734605 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-13 00:59:56.734609 | orchestrator | 2026-04-13 00:59:56.734613 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:59:56.734617 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-13 00:59:56.734622 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-13 00:59:56.734629 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-13 00:59:56.734633 | orchestrator | 2026-04-13 00:59:56.734637 | orchestrator | 2026-04-13 00:59:56.734641 | orchestrator | 2026-04-13 00:59:56.734647 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:59:56.734652 | orchestrator | Monday 13 April 2026 00:59:55 +0000 (0:00:17.691) 0:01:57.543 ********** 2026-04-13 00:59:56.734656 | orchestrator | =============================================================================== 2026-04-13 00:59:56.734660 | orchestrator | create openstack pool(s) ----------------------------------------------- 39.17s 2026-04-13 00:59:56.734663 | orchestrator | generate keys ---------------------------------------------------------- 19.21s 2026-04-13 00:59:56.734667 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.69s 
2026-04-13 00:59:56.734671 | orchestrator | get keys from monitors -------------------------------------------------- 9.47s
2026-04-13 00:59:56.734675 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.11s
2026-04-13 00:59:56.734679 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.01s
2026-04-13 00:59:56.734683 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 1.69s
2026-04-13 00:59:56.734687 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.33s
2026-04-13 00:59:56.734691 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 1.07s
2026-04-13 00:59:56.734695 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.03s
2026-04-13 00:59:56.734699 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.87s
2026-04-13 00:59:56.734703 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.83s
2026-04-13 00:59:56.734707 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.81s
2026-04-13 00:59:56.734710 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s
2026-04-13 00:59:56.734714 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.69s
2026-04-13 00:59:56.734718 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.69s
2026-04-13 00:59:56.734722 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.69s
2026-04-13 00:59:56.734726 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.66s
2026-04-13 00:59:56.734730 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.62s
2026-04-13 00:59:56.734734 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.58s
2026-04-13 00:59:56.734738 | orchestrator | 2026-04-13 00:59:56 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 00:59:56.734742 | orchestrator | 2026-04-13 00:59:56 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:59:59.768686 | orchestrator | 2026-04-13 00:59:59 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 00:59:59.772756 | orchestrator | 2026-04-13 00:59:59 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 00:59:59.773696 | orchestrator | 2026-04-13 00:59:59 | INFO  | Task 21546b71-c934-4d50-940d-64b9980e95a5 is in state STARTED
2026-04-13 00:59:59.773838 | orchestrator | 2026-04-13 00:59:59 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:02.821928 | orchestrator | 2026-04-13 01:00:02 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:02.823722 | orchestrator | 2026-04-13 01:00:02 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:02.825604 | orchestrator | 2026-04-13 01:00:02 | INFO  | Task 21546b71-c934-4d50-940d-64b9980e95a5 is in state STARTED
2026-04-13 01:00:02.825664 | orchestrator | 2026-04-13 01:00:02 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:05.865635 | orchestrator | 2026-04-13 01:00:05 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:05.869119 | orchestrator | 2026-04-13 01:00:05 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:05.870858 | orchestrator | 2026-04-13 01:00:05 | INFO  | Task 21546b71-c934-4d50-940d-64b9980e95a5 is in state STARTED
2026-04-13 01:00:05.870921 | orchestrator | 2026-04-13 01:00:05 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:08.919810 | orchestrator | 2026-04-13 01:00:08 | INFO  | Task
ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:08.921695 | orchestrator | 2026-04-13 01:00:08 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:08.923642 | orchestrator | 2026-04-13 01:00:08 | INFO  | Task 21546b71-c934-4d50-940d-64b9980e95a5 is in state STARTED
2026-04-13 01:00:08.923805 | orchestrator | 2026-04-13 01:00:08 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:11.966535 | orchestrator | 2026-04-13 01:00:11 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:11.969548 | orchestrator | 2026-04-13 01:00:11 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:11.972998 | orchestrator | 2026-04-13 01:00:11 | INFO  | Task 21546b71-c934-4d50-940d-64b9980e95a5 is in state STARTED
2026-04-13 01:00:11.973087 | orchestrator | 2026-04-13 01:00:11 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:15.037937 | orchestrator | 2026-04-13 01:00:15 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:15.039123 | orchestrator | 2026-04-13 01:00:15 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:15.039828 | orchestrator | 2026-04-13 01:00:15 | INFO  | Task 21546b71-c934-4d50-940d-64b9980e95a5 is in state STARTED
2026-04-13 01:00:15.039893 | orchestrator | 2026-04-13 01:00:15 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:18.081579 | orchestrator | 2026-04-13 01:00:18 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:18.083347 | orchestrator | 2026-04-13 01:00:18 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:18.085567 | orchestrator | 2026-04-13 01:00:18 | INFO  | Task 21546b71-c934-4d50-940d-64b9980e95a5 is in state STARTED
2026-04-13 01:00:18.086777 | orchestrator | 2026-04-13 01:00:18 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:21.156675 | orchestrator | 2026-04-13 01:00:21 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:21.160695 | orchestrator | 2026-04-13 01:00:21 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:21.163752 | orchestrator | 2026-04-13 01:00:21 | INFO  | Task 21546b71-c934-4d50-940d-64b9980e95a5 is in state STARTED
2026-04-13 01:00:21.164193 | orchestrator | 2026-04-13 01:00:21 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:24.218619 | orchestrator | 2026-04-13 01:00:24 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:24.221223 | orchestrator | 2026-04-13 01:00:24 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:24.224218 | orchestrator | 2026-04-13 01:00:24 | INFO  | Task 21546b71-c934-4d50-940d-64b9980e95a5 is in state STARTED
2026-04-13 01:00:24.224615 | orchestrator | 2026-04-13 01:00:24 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:27.271761 | orchestrator | 2026-04-13 01:00:27 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:27.272745 | orchestrator | 2026-04-13 01:00:27 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:27.274434 | orchestrator | 2026-04-13 01:00:27 | INFO  | Task 21546b71-c934-4d50-940d-64b9980e95a5 is in state STARTED
2026-04-13 01:00:27.274701 | orchestrator | 2026-04-13 01:00:27 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:30.333322 | orchestrator | 2026-04-13 01:00:30 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:30.335656 | orchestrator | 2026-04-13 01:00:30 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:30.337960 | orchestrator | 2026-04-13 01:00:30 | INFO  | Task 21546b71-c934-4d50-940d-64b9980e95a5 is in state STARTED
2026-04-13
01:00:30.338069 | orchestrator | 2026-04-13 01:00:30 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:33.385675 | orchestrator | 2026-04-13 01:00:33 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:33.387948 | orchestrator | 2026-04-13 01:00:33 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:33.390483 | orchestrator | 2026-04-13 01:00:33 | INFO  | Task 21546b71-c934-4d50-940d-64b9980e95a5 is in state STARTED
2026-04-13 01:00:33.390540 | orchestrator | 2026-04-13 01:00:33 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:36.441285 | orchestrator | 2026-04-13 01:00:36 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:36.443685 | orchestrator | 2026-04-13 01:00:36 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:36.445543 | orchestrator | 2026-04-13 01:00:36 | INFO  | Task 21546b71-c934-4d50-940d-64b9980e95a5 is in state STARTED
2026-04-13 01:00:36.445599 | orchestrator | 2026-04-13 01:00:36 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:39.495993 | orchestrator | 2026-04-13 01:00:39 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:39.497824 | orchestrator | 2026-04-13 01:00:39 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:39.500609 | orchestrator | 2026-04-13 01:00:39 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED
2026-04-13 01:00:39.502589 | orchestrator | 2026-04-13 01:00:39 | INFO  | Task 21546b71-c934-4d50-940d-64b9980e95a5 is in state SUCCESS
2026-04-13 01:00:39.502662 | orchestrator | 2026-04-13 01:00:39 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:42.539178 | orchestrator | 2026-04-13 01:00:42 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:42.540829 | orchestrator | 2026-04-13 01:00:42 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:42.542692 | orchestrator | 2026-04-13 01:00:42 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED
2026-04-13 01:00:42.542729 | orchestrator | 2026-04-13 01:00:42 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:45.585084 | orchestrator | 2026-04-13 01:00:45 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:45.586369 | orchestrator | 2026-04-13 01:00:45 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:45.588507 | orchestrator | 2026-04-13 01:00:45 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED
2026-04-13 01:00:45.588722 | orchestrator | 2026-04-13 01:00:45 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:48.647371 | orchestrator | 2026-04-13 01:00:48 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:48.651792 | orchestrator | 2026-04-13 01:00:48 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:48.653479 | orchestrator | 2026-04-13 01:00:48 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED
2026-04-13 01:00:48.653530 | orchestrator | 2026-04-13 01:00:48 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:51.697697 | orchestrator | 2026-04-13 01:00:51 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:51.699346 | orchestrator | 2026-04-13 01:00:51 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:51.700761 | orchestrator | 2026-04-13 01:00:51 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED
2026-04-13 01:00:51.702851 | orchestrator | 2026-04-13 01:00:51 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:54.756912 | orchestrator | 2026-04-13 01:00:54 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in
state STARTED
2026-04-13 01:00:54.758506 | orchestrator | 2026-04-13 01:00:54 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:54.761155 | orchestrator | 2026-04-13 01:00:54 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED
2026-04-13 01:00:54.761215 | orchestrator | 2026-04-13 01:00:54 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:00:57.812301 | orchestrator | 2026-04-13 01:00:57 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:00:57.814488 | orchestrator | 2026-04-13 01:00:57 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:00:57.816291 | orchestrator | 2026-04-13 01:00:57 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED
2026-04-13 01:00:57.816359 | orchestrator | 2026-04-13 01:00:57 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:01:00.870702 | orchestrator | 2026-04-13 01:01:00 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:01:00.871738 | orchestrator | 2026-04-13 01:01:00 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:01:00.874003 | orchestrator | 2026-04-13 01:01:00 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED
2026-04-13 01:01:00.874133 | orchestrator | 2026-04-13 01:01:00 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:01:03.932049 | orchestrator | 2026-04-13 01:01:03 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:01:03.935011 | orchestrator | 2026-04-13 01:01:03 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:01:03.938274 | orchestrator | 2026-04-13 01:01:03 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED
2026-04-13 01:01:03.938342 | orchestrator | 2026-04-13 01:01:03 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:01:06.984697 | orchestrator | 2026-04-13 01:01:06 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:01:06.986880 | orchestrator | 2026-04-13 01:01:06 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:01:06.988552 | orchestrator | 2026-04-13 01:01:06 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED
2026-04-13 01:01:06.989302 | orchestrator | 2026-04-13 01:01:06 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:01:10.032892 | orchestrator | 2026-04-13 01:01:10 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:01:10.033877 | orchestrator | 2026-04-13 01:01:10 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:01:10.035312 | orchestrator | 2026-04-13 01:01:10 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED
2026-04-13 01:01:10.035353 | orchestrator | 2026-04-13 01:01:10 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:01:13.104667 | orchestrator | 2026-04-13 01:01:13 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:01:13.105599 | orchestrator | 2026-04-13 01:01:13 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:01:13.107027 | orchestrator | 2026-04-13 01:01:13 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED
2026-04-13 01:01:13.107067 | orchestrator | 2026-04-13 01:01:13 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:01:16.152894 | orchestrator | 2026-04-13 01:01:16 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:01:16.154544 | orchestrator | 2026-04-13 01:01:16 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:01:16.159268 | orchestrator | 2026-04-13 01:01:16 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED
2026-04-13 01:01:16.160621 | orchestrator | 2026-04-13 01:01:16 | INFO  |
Wait 1 second(s) until the next check
2026-04-13 01:01:19.204783 | orchestrator | 2026-04-13 01:01:19 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:01:19.206806 | orchestrator | 2026-04-13 01:01:19 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:01:19.208609 | orchestrator | 2026-04-13 01:01:19 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED
2026-04-13 01:01:19.208648 | orchestrator | 2026-04-13 01:01:19 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:01:22.255766 | orchestrator | 2026-04-13 01:01:22 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:01:22.256642 | orchestrator | 2026-04-13 01:01:22 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state STARTED
2026-04-13 01:01:22.257982 | orchestrator | 2026-04-13 01:01:22 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED
2026-04-13 01:01:22.258075 | orchestrator | 2026-04-13 01:01:22 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:01:25.303557 | orchestrator | 2026-04-13 01:01:25 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:01:25.307284 | orchestrator | 2026-04-13 01:01:25 | INFO  | Task 363a5af4-fe55-4869-9a38-4129c894fb3e is in state SUCCESS
2026-04-13 01:01:25.307476 | orchestrator |
2026-04-13 01:01:25.307489 | orchestrator |
2026-04-13 01:01:25.307494 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-04-13 01:01:25.307499 | orchestrator |
2026-04-13 01:01:25.307503 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-04-13 01:01:25.307508 | orchestrator | Monday 13 April 2026 00:59:59 +0000 (0:00:00.244) 0:00:00.244 **********
2026-04-13 01:01:25.307512 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-13 01:01:25.307517 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-13 01:01:25.307522 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-13 01:01:25.307527 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-13 01:01:25.307549 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-13 01:01:25.307553 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-13 01:01:25.307557 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-13 01:01:25.307562 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-13 01:01:25.307566 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-13 01:01:25.307570 | orchestrator |
2026-04-13 01:01:25.307584 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-04-13 01:01:25.307588 | orchestrator | Monday 13 April 2026 01:00:04 +0000 (0:00:04.842) 0:00:05.087 **********
2026-04-13 01:01:25.307592 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-13 01:01:25.307596 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-13 01:01:25.307600 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-13 01:01:25.307604 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-13 01:01:25.307608 | orchestrator | ok: [testbed-manager ->
testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-13 01:01:25.307612 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-13 01:01:25.307616 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-13 01:01:25.307620 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-13 01:01:25.307625 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-13 01:01:25.307629 | orchestrator |
2026-04-13 01:01:25.307633 | orchestrator | TASK [Create share directory] **************************************************
2026-04-13 01:01:25.307637 | orchestrator | Monday 13 April 2026 01:00:08 +0000 (0:00:04.158) 0:00:09.245 **********
2026-04-13 01:01:25.307642 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-13 01:01:25.307646 | orchestrator |
2026-04-13 01:01:25.307650 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-04-13 01:01:25.307654 | orchestrator | Monday 13 April 2026 01:00:09 +0000 (0:00:01.144) 0:00:10.390 **********
2026-04-13 01:01:25.307658 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-04-13 01:01:25.307662 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-13 01:01:25.307666 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-13 01:01:25.307670 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-04-13 01:01:25.307674 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-13 01:01:25.307678 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-04-13 01:01:25.307683 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-04-13 01:01:25.307687 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-04-13 01:01:25.307691 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-04-13 01:01:25.307695 | orchestrator |
2026-04-13 01:01:25.307699 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-04-13 01:01:25.307703 | orchestrator | Monday 13 April 2026 01:00:26 +0000 (0:00:16.392) 0:00:26.782 **********
2026-04-13 01:01:25.307711 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-04-13 01:01:25.307716 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-04-13 01:01:25.307720 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-13 01:01:25.307724 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-13 01:01:25.307734 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-13 01:01:25.307738 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-13 01:01:25.307742 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-04-13 01:01:25.307746 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-04-13 01:01:25.307750 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-04-13 01:01:25.307754 | orchestrator |
2026-04-13 01:01:25.307758 | orchestrator | TASK [Write
ceph keys to the configuration directory] **************************
2026-04-13 01:01:25.307762 | orchestrator | Monday 13 April 2026 01:00:29 +0000 (0:00:03.518) 0:00:30.301 **********
2026-04-13 01:01:25.307767 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-04-13 01:01:25.307771 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-13 01:01:25.307775 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-13 01:01:25.307779 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-04-13 01:01:25.307783 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-13 01:01:25.307787 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-04-13 01:01:25.307791 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-04-13 01:01:25.307795 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-04-13 01:01:25.307802 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-04-13 01:01:25.307806 | orchestrator |
2026-04-13 01:01:25.307810 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 01:01:25.307814 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 01:01:25.307819 | orchestrator |
2026-04-13 01:01:25.307823 | orchestrator |
2026-04-13 01:01:25.307827 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 01:01:25.307831 | orchestrator | Monday 13 April 2026 01:00:37 +0000 (0:00:07.356) 0:00:37.657 **********
2026-04-13 01:01:25.307835 | orchestrator | ===============================================================================
2026-04-13 01:01:25.307839 | orchestrator | Write ceph keys to the share directory --------------------------------- 16.39s
2026-04-13 01:01:25.307843 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.36s
2026-04-13 01:01:25.307847 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.84s
2026-04-13 01:01:25.307851 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.16s
2026-04-13 01:01:25.307855 | orchestrator | Check if target directories exist --------------------------------------- 3.52s
2026-04-13 01:01:25.307859 | orchestrator | Create share directory -------------------------------------------------- 1.14s
2026-04-13 01:01:25.307864 | orchestrator |
2026-04-13 01:01:25.309719 | orchestrator |
2026-04-13 01:01:25.309740 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 01:01:25.309746 | orchestrator |
2026-04-13 01:01:25.309751 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 01:01:25.309765 | orchestrator | Monday 13 April 2026 00:59:25 +0000 (0:00:00.383) 0:00:00.384 **********
2026-04-13 01:01:25.309770 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:01:25.309776 | orchestrator | ok: [testbed-node-1]
2026-04-13 01:01:25.309780 | orchestrator | ok: [testbed-node-2]
2026-04-13 01:01:25.309785 | orchestrator |
2026-04-13 01:01:25.309789 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 01:01:25.309794 | orchestrator | Monday 13 April 2026 00:59:26 +0000 (0:00:00.320) 0:00:00.704 **********
2026-04-13 01:01:25.309799 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-04-13 01:01:25.309804 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-04-13 01:01:25.309808 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-04-13 01:01:25.309813 | orchestrator |
2026-04-13
01:01:25.309818 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-04-13 01:01:25.309822 | orchestrator |
2026-04-13 01:01:25.309827 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-13 01:01:25.309831 | orchestrator | Monday 13 April 2026 00:59:26 +0000 (0:00:00.321) 0:00:01.026 **********
2026-04-13 01:01:25.309836 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 01:01:25.309841 | orchestrator |
2026-04-13 01:01:25.309846 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-04-13 01:01:25.309850 | orchestrator | Monday 13 April 2026 00:59:27 +0000 (0:00:00.731) 0:00:01.758 **********
2026-04-13 01:01:25.309865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-13 01:01:25.309881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-13 01:01:25.309892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-13 01:01:25.309898 | orchestrator |
2026-04-13 01:01:25.309903 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-04-13 01:01:25.309911 | orchestrator | Monday 13 April 2026 00:59:28 +0000 (0:00:01.344) 0:00:03.103 **********
2026-04-13 01:01:25.309916 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:01:25.309920 | orchestrator | ok: [testbed-node-1]
2026-04-13 01:01:25.309925 | orchestrator | ok: [testbed-node-2]
2026-04-13 01:01:25.309929 | orchestrator |
2026-04-13 01:01:25.309933 | orchestrator |
TASK [horizon : include_tasks] ************************************************* 2026-04-13 01:01:25.309938 | orchestrator | Monday 13 April 2026 00:59:29 +0000 (0:00:00.367) 0:00:03.470 ********** 2026-04-13 01:01:25.309942 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-13 01:01:25.309950 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-13 01:01:25.309954 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-13 01:01:25.309959 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-13 01:01:25.309963 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-13 01:01:25.309968 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-13 01:01:25.309998 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-13 01:01:25.310003 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-13 01:01:25.310008 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-13 01:01:25.310012 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-13 01:01:25.310075 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-13 01:01:25.310081 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-13 01:01:25.310088 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-13 01:01:25.310094 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-13 01:01:25.310101 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  
2026-04-13 01:01:25.310107 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-13 01:01:25.310114 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-13 01:01:25.310120 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-13 01:01:25.310126 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-13 01:01:25.310133 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-13 01:01:25.310139 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-13 01:01:25.310146 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-13 01:01:25.310153 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-13 01:01:25.310160 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-13 01:01:25.310169 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-13 01:01:25.310178 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-13 01:01:25.310185 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-13 01:01:25.310193 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-13 01:01:25.310204 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-13 01:01:25.310236 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-13 01:01:25.310245 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-13 01:01:25.310249 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-13 01:01:25.310254 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-13 01:01:25.310259 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-13 01:01:25.310264 | orchestrator | 2026-04-13 01:01:25.310268 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-13 01:01:25.310273 | orchestrator | Monday 13 April 2026 00:59:29 +0000 (0:00:00.783) 0:00:04.253 ********** 2026-04-13 01:01:25.310277 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:01:25.310282 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:01:25.310286 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:01:25.310291 | orchestrator | 2026-04-13 01:01:25.310296 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-13 01:01:25.310300 | orchestrator | Monday 13 April 2026 00:59:30 +0000 (0:00:00.461) 0:00:04.715 ********** 2026-04-13 01:01:25.310305 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310309 | orchestrator | 2026-04-13 01:01:25.310319 | orchestrator | TASK [horizon 
: Update custom policy file name] ******************************** 2026-04-13 01:01:25.310323 | orchestrator | Monday 13 April 2026 00:59:30 +0000 (0:00:00.146) 0:00:04.862 ********** 2026-04-13 01:01:25.310328 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310332 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:01:25.310337 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:01:25.310341 | orchestrator | 2026-04-13 01:01:25.310345 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-13 01:01:25.310350 | orchestrator | Monday 13 April 2026 00:59:30 +0000 (0:00:00.293) 0:00:05.155 ********** 2026-04-13 01:01:25.310354 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:01:25.310429 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:01:25.310437 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:01:25.310442 | orchestrator | 2026-04-13 01:01:25.310446 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-13 01:01:25.310451 | orchestrator | Monday 13 April 2026 00:59:31 +0000 (0:00:00.284) 0:00:05.439 ********** 2026-04-13 01:01:25.310456 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310460 | orchestrator | 2026-04-13 01:01:25.310465 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-13 01:01:25.310469 | orchestrator | Monday 13 April 2026 00:59:31 +0000 (0:00:00.149) 0:00:05.588 ********** 2026-04-13 01:01:25.310474 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310478 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:01:25.310483 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:01:25.310487 | orchestrator | 2026-04-13 01:01:25.310492 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-13 01:01:25.310496 | orchestrator | Monday 13 April 2026 00:59:31 +0000 (0:00:00.490) 
0:00:06.078 ********** 2026-04-13 01:01:25.310501 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:01:25.310505 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:01:25.310509 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:01:25.310514 | orchestrator | 2026-04-13 01:01:25.310524 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-13 01:01:25.310528 | orchestrator | Monday 13 April 2026 00:59:31 +0000 (0:00:00.340) 0:00:06.419 ********** 2026-04-13 01:01:25.310533 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310537 | orchestrator | 2026-04-13 01:01:25.310542 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-13 01:01:25.310546 | orchestrator | Monday 13 April 2026 00:59:32 +0000 (0:00:00.115) 0:00:06.535 ********** 2026-04-13 01:01:25.310551 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310555 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:01:25.310560 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:01:25.310564 | orchestrator | 2026-04-13 01:01:25.310569 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-13 01:01:25.310573 | orchestrator | Monday 13 April 2026 00:59:32 +0000 (0:00:00.309) 0:00:06.844 ********** 2026-04-13 01:01:25.310578 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:01:25.310582 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:01:25.310587 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:01:25.310592 | orchestrator | 2026-04-13 01:01:25.310596 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-13 01:01:25.310601 | orchestrator | Monday 13 April 2026 00:59:32 +0000 (0:00:00.301) 0:00:07.146 ********** 2026-04-13 01:01:25.310605 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310610 | orchestrator | 2026-04-13 01:01:25.310614 | orchestrator | 
TASK [horizon : Update custom policy file name] ******************************** 2026-04-13 01:01:25.310619 | orchestrator | Monday 13 April 2026 00:59:32 +0000 (0:00:00.137) 0:00:07.283 ********** 2026-04-13 01:01:25.310623 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310628 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:01:25.310632 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:01:25.310637 | orchestrator | 2026-04-13 01:01:25.310641 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-13 01:01:25.310646 | orchestrator | Monday 13 April 2026 00:59:33 +0000 (0:00:00.504) 0:00:07.788 ********** 2026-04-13 01:01:25.310650 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:01:25.310655 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:01:25.310659 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:01:25.310663 | orchestrator | 2026-04-13 01:01:25.310668 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-13 01:01:25.310673 | orchestrator | Monday 13 April 2026 00:59:33 +0000 (0:00:00.323) 0:00:08.111 ********** 2026-04-13 01:01:25.310677 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310681 | orchestrator | 2026-04-13 01:01:25.310686 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-13 01:01:25.310694 | orchestrator | Monday 13 April 2026 00:59:33 +0000 (0:00:00.145) 0:00:08.256 ********** 2026-04-13 01:01:25.310698 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310703 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:01:25.310707 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:01:25.310712 | orchestrator | 2026-04-13 01:01:25.310716 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-13 01:01:25.310721 | orchestrator | Monday 13 April 2026 00:59:34 +0000 
(0:00:00.351) 0:00:08.608 ********** 2026-04-13 01:01:25.310725 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:01:25.310730 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:01:25.310734 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:01:25.310739 | orchestrator | 2026-04-13 01:01:25.310743 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-13 01:01:25.310748 | orchestrator | Monday 13 April 2026 00:59:34 +0000 (0:00:00.535) 0:00:09.143 ********** 2026-04-13 01:01:25.310752 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310757 | orchestrator | 2026-04-13 01:01:25.310761 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-13 01:01:25.310766 | orchestrator | Monday 13 April 2026 00:59:34 +0000 (0:00:00.132) 0:00:09.275 ********** 2026-04-13 01:01:25.310774 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310778 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:01:25.310783 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:01:25.310787 | orchestrator | 2026-04-13 01:01:25.310792 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-13 01:01:25.310800 | orchestrator | Monday 13 April 2026 00:59:35 +0000 (0:00:00.274) 0:00:09.549 ********** 2026-04-13 01:01:25.310804 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:01:25.310809 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:01:25.310813 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:01:25.310818 | orchestrator | 2026-04-13 01:01:25.310822 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-13 01:01:25.310827 | orchestrator | Monday 13 April 2026 00:59:35 +0000 (0:00:00.288) 0:00:09.838 ********** 2026-04-13 01:01:25.310831 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310836 | orchestrator | 2026-04-13 01:01:25.310840 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-13 01:01:25.310845 | orchestrator | Monday 13 April 2026 00:59:35 +0000 (0:00:00.135) 0:00:09.974 ********** 2026-04-13 01:01:25.310849 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310854 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:01:25.310858 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:01:25.310863 | orchestrator | 2026-04-13 01:01:25.310867 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-13 01:01:25.310872 | orchestrator | Monday 13 April 2026 00:59:35 +0000 (0:00:00.285) 0:00:10.259 ********** 2026-04-13 01:01:25.310876 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:01:25.310881 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:01:25.310885 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:01:25.310890 | orchestrator | 2026-04-13 01:01:25.310894 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-13 01:01:25.310899 | orchestrator | Monday 13 April 2026 00:59:36 +0000 (0:00:00.520) 0:00:10.780 ********** 2026-04-13 01:01:25.310903 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310908 | orchestrator | 2026-04-13 01:01:25.310912 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-13 01:01:25.310917 | orchestrator | Monday 13 April 2026 00:59:36 +0000 (0:00:00.144) 0:00:10.925 ********** 2026-04-13 01:01:25.310921 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310926 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:01:25.310930 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:01:25.310935 | orchestrator | 2026-04-13 01:01:25.310939 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-13 01:01:25.310944 | orchestrator | Monday 13 April 2026 
00:59:36 +0000 (0:00:00.316) 0:00:11.241 ********** 2026-04-13 01:01:25.310949 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:01:25.310953 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:01:25.310958 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:01:25.310962 | orchestrator | 2026-04-13 01:01:25.310966 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-13 01:01:25.310971 | orchestrator | Monday 13 April 2026 00:59:37 +0000 (0:00:00.311) 0:00:11.553 ********** 2026-04-13 01:01:25.310975 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310980 | orchestrator | 2026-04-13 01:01:25.310984 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-13 01:01:25.310989 | orchestrator | Monday 13 April 2026 00:59:37 +0000 (0:00:00.140) 0:00:11.694 ********** 2026-04-13 01:01:25.310993 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.310998 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:01:25.311003 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:01:25.311007 | orchestrator | 2026-04-13 01:01:25.311012 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-13 01:01:25.311016 | orchestrator | Monday 13 April 2026 00:59:37 +0000 (0:00:00.282) 0:00:11.976 ********** 2026-04-13 01:01:25.311028 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:01:25.311033 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:01:25.311037 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:01:25.311042 | orchestrator | 2026-04-13 01:01:25.311046 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-13 01:01:25.311051 | orchestrator | Monday 13 April 2026 00:59:38 +0000 (0:00:00.511) 0:00:12.487 ********** 2026-04-13 01:01:25.311055 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.311060 | orchestrator | 2026-04-13 
01:01:25.311064 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-13 01:01:25.311069 | orchestrator | Monday 13 April 2026 00:59:38 +0000 (0:00:00.176) 0:00:12.664 ********** 2026-04-13 01:01:25.311073 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.311078 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:01:25.311082 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:01:25.311087 | orchestrator | 2026-04-13 01:01:25.311091 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-04-13 01:01:25.311096 | orchestrator | Monday 13 April 2026 00:59:38 +0000 (0:00:00.297) 0:00:12.961 ********** 2026-04-13 01:01:25.311100 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:01:25.311105 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:01:25.311112 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:01:25.311117 | orchestrator | 2026-04-13 01:01:25.311121 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-04-13 01:01:25.311126 | orchestrator | Monday 13 April 2026 00:59:40 +0000 (0:00:01.919) 0:00:14.880 ********** 2026-04-13 01:01:25.311130 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-13 01:01:25.311135 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-13 01:01:25.311139 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-13 01:01:25.311144 | orchestrator | 2026-04-13 01:01:25.311148 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-04-13 01:01:25.311153 | orchestrator | Monday 13 April 2026 00:59:42 +0000 (0:00:02.399) 0:00:17.280 ********** 2026-04-13 01:01:25.311157 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-13 01:01:25.311162 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-13 01:01:25.311167 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-13 01:01:25.311171 | orchestrator | 2026-04-13 01:01:25.311176 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-13 01:01:25.311183 | orchestrator | Monday 13 April 2026 00:59:45 +0000 (0:00:02.213) 0:00:19.493 ********** 2026-04-13 01:01:25.311188 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-13 01:01:25.311193 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-13 01:01:25.311197 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-13 01:01:25.311202 | orchestrator | 2026-04-13 01:01:25.311222 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-13 01:01:25.311227 | orchestrator | Monday 13 April 2026 00:59:46 +0000 (0:00:01.622) 0:00:21.116 ********** 2026-04-13 01:01:25.311232 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.311236 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:01:25.311241 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:01:25.311245 | orchestrator | 2026-04-13 01:01:25.311250 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-13 01:01:25.311254 | orchestrator | Monday 13 April 2026 00:59:46 +0000 (0:00:00.266) 0:00:21.383 ********** 2026-04-13 01:01:25.311258 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.311267 | orchestrator | skipping: [testbed-node-1] 2026-04-13 
01:01:25.311272 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:01:25.311276 | orchestrator | 2026-04-13 01:01:25.311280 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-13 01:01:25.311285 | orchestrator | Monday 13 April 2026 00:59:47 +0000 (0:00:00.276) 0:00:21.659 ********** 2026-04-13 01:01:25.311289 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:01:25.311294 | orchestrator | 2026-04-13 01:01:25.311298 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-13 01:01:25.311303 | orchestrator | Monday 13 April 2026 00:59:48 +0000 (0:00:00.813) 0:00:22.473 ********** 2026-04-13 01:01:25.311312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-13 01:01:25.311323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-13 01:01:25.311335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-13 01:01:25.311340 | orchestrator | 2026-04-13 01:01:25.311345 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-13 01:01:25.311349 | orchestrator | Monday 13 April 2026 00:59:49 +0000 (0:00:01.537) 0:00:24.011 ********** 2026-04-13 01:01:25.311358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 01:01:25.311366 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.311388 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 01:01:25.311394 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:01:25.311399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 01:01:25.311407 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:01:25.311412 | orchestrator | 2026-04-13 01:01:25.311416 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-13 01:01:25.311421 | orchestrator | Monday 13 April 2026 00:59:50 +0000 (0:00:00.855) 0:00:24.866 ********** 2026-04-13 01:01:25.311432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 01:01:25.311440 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.311445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 01:01:25.311450 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:01:25.311463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 01:01:25.311471 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:01:25.311476 | orchestrator | 2026-04-13 01:01:25.311480 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-04-13 01:01:25.311485 | orchestrator | Monday 13 April 2026 00:59:51 +0000 (0:00:01.133) 0:00:26.000 ********** 2026-04-13 01:01:25.311493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-13 01:01:25.311502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-13 01:01:25.311518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-13 01:01:25.311523 | orchestrator | 2026-04-13 01:01:25.311528 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-13 01:01:25.311532 | orchestrator | Monday 13 April 2026 00:59:52 +0000 (0:00:01.202) 0:00:27.202 ********** 2026-04-13 01:01:25.311537 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:01:25.311542 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:01:25.311546 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:01:25.311551 | orchestrator | 2026-04-13 01:01:25.311558 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-13 01:01:25.311563 | orchestrator | Monday 13 April 2026 00:59:53 +0000 (0:00:00.315) 0:00:27.518 ********** 2026-04-13 01:01:25.311568 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:01:25.311572 | orchestrator | 2026-04-13 01:01:25.311577 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-04-13 01:01:25.311584 | orchestrator | Monday 13 April 2026 00:59:53 +0000 (0:00:00.746) 0:00:28.264 ********** 2026-04-13 01:01:25.311588 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:01:25.311593 | orchestrator | 2026-04-13 01:01:25.311597 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-04-13 01:01:25.311602 | orchestrator | Monday 13 April 2026 
00:59:56 +0000 (0:00:02.213) 0:00:30.478 ********** 2026-04-13 01:01:25.311606 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:01:25.311611 | orchestrator | 2026-04-13 01:01:25.311615 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-13 01:01:25.311620 | orchestrator | Monday 13 April 2026 00:59:58 +0000 (0:00:02.262) 0:00:32.741 ********** 2026-04-13 01:01:25.311624 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:01:25.311629 | orchestrator | 2026-04-13 01:01:25.311633 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-13 01:01:25.311638 | orchestrator | Monday 13 April 2026 01:00:14 +0000 (0:00:16.592) 0:00:49.333 ********** 2026-04-13 01:01:25.311642 | orchestrator | 2026-04-13 01:01:25.311646 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-13 01:01:25.311651 | orchestrator | Monday 13 April 2026 01:00:14 +0000 (0:00:00.069) 0:00:49.403 ********** 2026-04-13 01:01:25.311655 | orchestrator | 2026-04-13 01:01:25.311660 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-13 01:01:25.311664 | orchestrator | Monday 13 April 2026 01:00:15 +0000 (0:00:00.084) 0:00:49.488 ********** 2026-04-13 01:01:25.311669 | orchestrator | 2026-04-13 01:01:25.311673 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-13 01:01:25.311678 | orchestrator | Monday 13 April 2026 01:00:15 +0000 (0:00:00.069) 0:00:49.558 ********** 2026-04-13 01:01:25.311682 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:01:25.311687 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:01:25.311691 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:01:25.311696 | orchestrator | 2026-04-13 01:01:25.311700 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-13 01:01:25.311705 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-13 01:01:25.311709 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-13 01:01:25.311714 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-13 01:01:25.311718 | orchestrator | 2026-04-13 01:01:25.311723 | orchestrator | 2026-04-13 01:01:25.311727 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:01:25.311732 | orchestrator | Monday 13 April 2026 01:01:22 +0000 (0:01:06.971) 0:01:56.529 ********** 2026-04-13 01:01:25.311736 | orchestrator | =============================================================================== 2026-04-13 01:01:25.311741 | orchestrator | horizon : Restart horizon container ------------------------------------ 66.97s 2026-04-13 01:01:25.311745 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.59s 2026-04-13 01:01:25.311750 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.40s 2026-04-13 01:01:25.311754 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.26s 2026-04-13 01:01:25.311759 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.21s 2026-04-13 01:01:25.311766 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.21s 2026-04-13 01:01:25.311771 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.92s 2026-04-13 01:01:25.311775 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.62s 2026-04-13 01:01:25.311780 | orchestrator | service-cert-copy : horizon | 
Copying over extra CA certificates -------- 1.54s 2026-04-13 01:01:25.311784 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.34s 2026-04-13 01:01:25.311789 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.20s 2026-04-13 01:01:25.311793 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.13s 2026-04-13 01:01:25.311800 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.86s 2026-04-13 01:01:25.311805 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s 2026-04-13 01:01:25.311809 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s 2026-04-13 01:01:25.311814 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2026-04-13 01:01:25.311818 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2026-04-13 01:01:25.311823 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s 2026-04-13 01:01:25.311827 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2026-04-13 01:01:25.311832 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s 2026-04-13 01:01:25.311836 | orchestrator | 2026-04-13 01:01:25 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED 2026-04-13 01:01:25.311841 | orchestrator | 2026-04-13 01:01:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:28.362734 | orchestrator | 2026-04-13 01:01:28 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 01:01:28.364574 | orchestrator | 2026-04-13 01:01:28 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED 2026-04-13 01:01:28.364777 | orchestrator | 2026-04-13 
01:01:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:31.413671 | orchestrator | 2026-04-13 01:01:31 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 01:01:31.415367 | orchestrator | 2026-04-13 01:01:31 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED 2026-04-13 01:01:31.415412 | orchestrator | 2026-04-13 01:01:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:34.470308 | orchestrator | 2026-04-13 01:01:34 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 01:01:34.473004 | orchestrator | 2026-04-13 01:01:34 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED 2026-04-13 01:01:34.473058 | orchestrator | 2026-04-13 01:01:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:37.520010 | orchestrator | 2026-04-13 01:01:37 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 01:01:37.520915 | orchestrator | 2026-04-13 01:01:37 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED 2026-04-13 01:01:37.520955 | orchestrator | 2026-04-13 01:01:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:40.574354 | orchestrator | 2026-04-13 01:01:40 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 01:01:40.575073 | orchestrator | 2026-04-13 01:01:40 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state STARTED 2026-04-13 01:01:40.575103 | orchestrator | 2026-04-13 01:01:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:43.623751 | orchestrator | 2026-04-13 01:01:43 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED 2026-04-13 01:01:43.624983 | orchestrator | 2026-04-13 01:01:43 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 01:01:43.627188 | orchestrator | 2026-04-13 01:01:43 | INFO  | Task de27bbc2-34a5-4af2-bae9-60c945ec51e6 is in state 
STARTED 2026-04-13 01:01:43.630332 | orchestrator | 2026-04-13 01:01:43 | INFO  | Task 3343b7d3-bc47-4ca5-b745-db3bc1d2ce96 is in state SUCCESS 2026-04-13 01:01:43.631936 | orchestrator | 2026-04-13 01:01:43 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:01:43.632370 | orchestrator | 2026-04-13 01:01:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:46.681276 | orchestrator | 2026-04-13 01:01:46 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED 2026-04-13 01:01:46.682587 | orchestrator | 2026-04-13 01:01:46 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 01:01:46.683837 | orchestrator | 2026-04-13 01:01:46 | INFO  | Task de27bbc2-34a5-4af2-bae9-60c945ec51e6 is in state STARTED 2026-04-13 01:01:46.686463 | orchestrator | 2026-04-13 01:01:46 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:01:46.686507 | orchestrator | 2026-04-13 01:01:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:49.731625 | orchestrator | 2026-04-13 01:01:49 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED 2026-04-13 01:01:49.734449 | orchestrator | 2026-04-13 01:01:49 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 01:01:49.737429 | orchestrator | 2026-04-13 01:01:49 | INFO  | Task de27bbc2-34a5-4af2-bae9-60c945ec51e6 is in state SUCCESS 2026-04-13 01:01:49.744424 | orchestrator | 2026-04-13 01:01:49 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:01:49.744897 | orchestrator | 2026-04-13 01:01:49 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED 2026-04-13 01:01:49.746233 | orchestrator | 2026-04-13 01:01:49 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:01:49.746275 | orchestrator | 2026-04-13 01:01:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 
01:01:52.841742 | orchestrator | 2026-04-13 01:01:52 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED 2026-04-13 01:01:52.842353 | orchestrator | 2026-04-13 01:01:52 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 01:01:52.843429 | orchestrator | 2026-04-13 01:01:52 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:01:52.844850 | orchestrator | 2026-04-13 01:01:52 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED 2026-04-13 01:01:52.846237 | orchestrator | 2026-04-13 01:01:52 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:01:52.846298 | orchestrator | 2026-04-13 01:01:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:55.883136 | orchestrator | 2026-04-13 01:01:55 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED 2026-04-13 01:01:55.883289 | orchestrator | 2026-04-13 01:01:55 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 01:01:55.884153 | orchestrator | 2026-04-13 01:01:55 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:01:55.886436 | orchestrator | 2026-04-13 01:01:55 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED 2026-04-13 01:01:55.887310 | orchestrator | 2026-04-13 01:01:55 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:01:55.887440 | orchestrator | 2026-04-13 01:01:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:58.930832 | orchestrator | 2026-04-13 01:01:58 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED 2026-04-13 01:01:58.932340 | orchestrator | 2026-04-13 01:01:58 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED 2026-04-13 01:01:58.934140 | orchestrator | 2026-04-13 01:01:58 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 
01:01:58.935784 | orchestrator | 2026-04-13 01:01:58 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED
2026-04-13 01:01:58.937288 | orchestrator | 2026-04-13 01:01:58 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED
2026-04-13 01:01:58.937340 | orchestrator | 2026-04-13 01:01:58 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:02:01.979959 | orchestrator | 2026-04-13 01:02:01 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED
2026-04-13 01:02:01.980434 | orchestrator | 2026-04-13 01:02:01 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:02:01.985006 | orchestrator | 2026-04-13 01:02:01 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED
2026-04-13 01:02:01.985738 | orchestrator | 2026-04-13 01:02:01 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED
2026-04-13 01:02:01.986583 | orchestrator | 2026-04-13 01:02:01 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED
2026-04-13 01:02:01.986777 | orchestrator | 2026-04-13 01:02:01 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:02:05.035777 | orchestrator | 2026-04-13 01:02:05 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED
2026-04-13 01:02:05.036386 | orchestrator | 2026-04-13 01:02:05 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:02:05.037299 | orchestrator | 2026-04-13 01:02:05 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED
2026-04-13 01:02:05.038558 | orchestrator | 2026-04-13 01:02:05 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED
2026-04-13 01:02:05.039594 | orchestrator | 2026-04-13 01:02:05 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED
2026-04-13 01:02:05.039629 | orchestrator | 2026-04-13 01:02:05 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:02:08.091602 | orchestrator | 2026-04-13 01:02:08 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED
2026-04-13 01:02:08.092890 | orchestrator | 2026-04-13 01:02:08 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:02:08.094705 | orchestrator | 2026-04-13 01:02:08 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED
2026-04-13 01:02:08.096266 | orchestrator | 2026-04-13 01:02:08 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED
2026-04-13 01:02:08.097386 | orchestrator | 2026-04-13 01:02:08 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED
2026-04-13 01:02:08.098079 | orchestrator | 2026-04-13 01:02:08 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:02:11.150485 | orchestrator | 2026-04-13 01:02:11 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED
2026-04-13 01:02:11.153685 | orchestrator | 2026-04-13 01:02:11 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:02:11.156477 | orchestrator | 2026-04-13 01:02:11 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED
2026-04-13 01:02:11.158151 | orchestrator | 2026-04-13 01:02:11 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED
2026-04-13 01:02:11.160007 | orchestrator | 2026-04-13 01:02:11 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED
2026-04-13 01:02:11.160056 | orchestrator | 2026-04-13 01:02:11 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:02:14.613779 | orchestrator | 2026-04-13 01:02:14 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED
2026-04-13 01:02:14.613888 | orchestrator | 2026-04-13 01:02:14 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:02:14.613905 | orchestrator | 2026-04-13 01:02:14 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED
2026-04-13 01:02:14.613918 | orchestrator | 2026-04-13 01:02:14 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED
2026-04-13 01:02:14.613929 | orchestrator | 2026-04-13 01:02:14 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED
2026-04-13 01:02:14.613941 | orchestrator | 2026-04-13 01:02:14 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:02:17.649811 | orchestrator | 2026-04-13 01:02:17 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED
2026-04-13 01:02:17.649900 | orchestrator | 2026-04-13 01:02:17 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:02:17.649913 | orchestrator | 2026-04-13 01:02:17 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED
2026-04-13 01:02:17.649924 | orchestrator | 2026-04-13 01:02:17 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED
2026-04-13 01:02:17.649934 | orchestrator | 2026-04-13 01:02:17 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED
2026-04-13 01:02:17.649945 | orchestrator | 2026-04-13 01:02:17 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:02:20.646668 | orchestrator | 2026-04-13 01:02:20 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED
2026-04-13 01:02:20.646781 | orchestrator | 2026-04-13 01:02:20 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state STARTED
2026-04-13 01:02:20.646807 | orchestrator | 2026-04-13 01:02:20 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED
2026-04-13 01:02:20.646825 | orchestrator | 2026-04-13 01:02:20 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED
2026-04-13 01:02:20.646846 | orchestrator | 2026-04-13 01:02:20 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED
2026-04-13 01:02:20.646865 | orchestrator | 2026-04-13 01:02:20 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:02:23.679555 | orchestrator | 2026-04-13 01:02:23 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED
2026-04-13 01:02:23.680402 | orchestrator | 2026-04-13 01:02:23 | INFO  | Task ee70577a-6daa-4c4d-8739-fbb0d093766c is in state SUCCESS
2026-04-13 01:02:23.682243 | orchestrator |
2026-04-13 01:02:23.682313 | orchestrator |
2026-04-13 01:02:23.682330 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-04-13 01:02:23.682343 | orchestrator |
2026-04-13 01:02:23.682355 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-04-13 01:02:23.682367 | orchestrator | Monday 13 April 2026 01:00:41 +0000 (0:00:00.427) 0:00:00.427 **********
2026-04-13 01:02:23.682379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-04-13 01:02:23.682391 | orchestrator |
2026-04-13 01:02:23.682402 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-04-13 01:02:23.682456 | orchestrator | Monday 13 April 2026 01:00:41 +0000 (0:00:00.307) 0:00:00.735 **********
2026-04-13 01:02:23.682470 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-04-13 01:02:23.682482 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-04-13 01:02:23.682493 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-04-13 01:02:23.682505 | orchestrator |
2026-04-13 01:02:23.682516 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-04-13 01:02:23.682527 | orchestrator | Monday 13 April 2026 01:00:43 +0000 (0:00:02.059) 0:00:02.795 **********
2026-04-13 01:02:23.682539 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-04-13 01:02:23.682550 | orchestrator |
2026-04-13 01:02:23.682562 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-04-13 01:02:23.682573 | orchestrator | Monday 13 April 2026 01:00:44 +0000 (0:00:00.954) 0:00:04.049 **********
2026-04-13 01:02:23.682584 | orchestrator | changed: [testbed-manager]
2026-04-13 01:02:23.682595 | orchestrator |
2026-04-13 01:02:23.682606 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-04-13 01:02:23.682618 | orchestrator | Monday 13 April 2026 01:00:45 +0000 (0:00:00.966) 0:00:05.003 **********
2026-04-13 01:02:23.682629 | orchestrator | changed: [testbed-manager]
2026-04-13 01:02:23.682640 | orchestrator |
2026-04-13 01:02:23.682651 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-04-13 01:02:23.682663 | orchestrator | Monday 13 April 2026 01:00:46 +0000 (0:00:00.966) 0:00:05.969 **********
2026-04-13 01:02:23.682674 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-04-13 01:02:23.682685 | orchestrator | ok: [testbed-manager]
2026-04-13 01:02:23.682696 | orchestrator |
2026-04-13 01:02:23.682707 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-04-13 01:02:23.682719 | orchestrator | Monday 13 April 2026 01:01:30 +0000 (0:00:43.935) 0:00:49.904 **********
2026-04-13 01:02:23.682730 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-04-13 01:02:23.682741 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-04-13 01:02:23.682752 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-04-13 01:02:23.682763 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-04-13 01:02:23.682775 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-04-13 01:02:23.682786 | orchestrator |
2026-04-13 01:02:23.682797 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-04-13 01:02:23.682808 | orchestrator | Monday 13 April 2026 01:01:35 +0000 (0:00:04.383) 0:00:54.288 **********
2026-04-13 01:02:23.682820 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-04-13 01:02:23.682831 | orchestrator |
2026-04-13 01:02:23.682842 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-04-13 01:02:23.682853 | orchestrator | Monday 13 April 2026 01:01:35 +0000 (0:00:00.714) 0:00:55.002 **********
2026-04-13 01:02:23.682864 | orchestrator | skipping: [testbed-manager]
2026-04-13 01:02:23.682876 | orchestrator |
2026-04-13 01:02:23.682887 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-04-13 01:02:23.682898 | orchestrator | Monday 13 April 2026 01:01:35 +0000 (0:00:00.128) 0:00:55.131 **********
2026-04-13 01:02:23.682909 | orchestrator | skipping: [testbed-manager]
2026-04-13 01:02:23.682920 | orchestrator |
2026-04-13 01:02:23.682932 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-04-13 01:02:23.682943 | orchestrator | Monday 13 April 2026 01:01:36 +0000 (0:00:00.316) 0:00:55.447 **********
2026-04-13 01:02:23.682955 | orchestrator | changed: [testbed-manager]
2026-04-13 01:02:23.682966 | orchestrator |
2026-04-13 01:02:23.682977 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-04-13 01:02:23.682988 | orchestrator | Monday 13 April 2026 01:01:37 +0000 (0:00:01.586) 0:00:57.033 **********
2026-04-13 01:02:23.683013 | orchestrator | changed: [testbed-manager]
2026-04-13 01:02:23.683034 | orchestrator |
2026-04-13 01:02:23.683053 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-04-13 01:02:23.683135 | orchestrator | Monday 13 April 2026 01:01:38 +0000 (0:00:00.756) 0:00:57.790 **********
2026-04-13 01:02:23.683162 | orchestrator | changed: [testbed-manager]
2026-04-13 01:02:23.683206 | orchestrator |
2026-04-13 01:02:23.683226 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-04-13 01:02:23.683244 | orchestrator | Monday 13 April 2026 01:01:39 +0000 (0:00:00.612) 0:00:58.403 **********
2026-04-13 01:02:23.683264 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-04-13 01:02:23.683278 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-04-13 01:02:23.683289 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-04-13 01:02:23.683300 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-04-13 01:02:23.683311 | orchestrator |
2026-04-13 01:02:23.683322 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 01:02:23.683334 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 01:02:23.683346 | orchestrator |
2026-04-13 01:02:23.683364 | orchestrator |
2026-04-13 01:02:23.683402 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 01:02:23.683423 | orchestrator | Monday 13 April 2026 01:01:40 +0000 (0:00:01.581) 0:00:59.984 **********
2026-04-13 01:02:23.683442 | orchestrator | ===============================================================================
2026-04-13 01:02:23.683461 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 43.94s
2026-04-13 01:02:23.683480 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.38s
2026-04-13 01:02:23.683500 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.06s
2026-04-13 01:02:23.683518 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.59s
2026-04-13 01:02:23.683547 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.58s
2026-04-13 01:02:23.683568 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.25s
2026-04-13 01:02:23.683588 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.97s
2026-04-13 01:02:23.683607 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.95s
2026-04-13 01:02:23.683627 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.76s
2026-04-13 01:02:23.683645 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.71s
2026-04-13 01:02:23.683663 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.61s
2026-04-13 01:02:23.683682 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.32s
2026-04-13 01:02:23.683700 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.31s
2026-04-13 01:02:23.683718 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2026-04-13 01:02:23.683738 | orchestrator |
2026-04-13 01:02:23.683758 | orchestrator |
2026-04-13 01:02:23.683777 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 01:02:23.683798 | orchestrator |
2026-04-13 01:02:23.683810 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 01:02:23.683821 | orchestrator | Monday 13 April 2026 01:01:44 +0000 (0:00:00.189) 0:00:00.189 **********
2026-04-13 01:02:23.683832 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:02:23.683844 | orchestrator | ok: [testbed-node-1]
2026-04-13 01:02:23.683855 | orchestrator | ok: [testbed-node-2]
2026-04-13 01:02:23.683866 | orchestrator |
2026-04-13 01:02:23.683877 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 01:02:23.683888 | orchestrator | Monday 13 April 2026 01:01:45 +0000 (0:00:00.363) 0:00:00.553 **********
2026-04-13 01:02:23.683910 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-13 01:02:23.683922 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-13 01:02:23.683933 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-13 01:02:23.683944 | orchestrator |
2026-04-13 01:02:23.683955 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-04-13 01:02:23.683966 | orchestrator |
2026-04-13 01:02:23.683977 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-04-13 01:02:23.683988 | orchestrator | Monday 13 April 2026 01:01:45 +0000 (0:00:00.556) 0:00:01.110 **********
2026-04-13 01:02:23.684000 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:02:23.684011 | orchestrator | ok: [testbed-node-2]
2026-04-13 01:02:23.684022 | orchestrator | ok: [testbed-node-1]
2026-04-13 01:02:23.684033 | orchestrator |
2026-04-13 01:02:23.684045 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 01:02:23.684056 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 01:02:23.684068 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 01:02:23.684079 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 01:02:23.684091 | orchestrator |
2026-04-13 01:02:23.684102 | orchestrator |
2026-04-13 01:02:23.684113 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 01:02:23.684124 | orchestrator | Monday 13 April 2026 01:01:46 +0000 (0:00:01.090) 0:00:02.201 **********
2026-04-13 01:02:23.684151 | orchestrator | ===============================================================================
2026-04-13 01:02:23.684189 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.09s
2026-04-13 01:02:23.684200 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s
2026-04-13 01:02:23.684212 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s
2026-04-13 01:02:23.684223 | orchestrator |
2026-04-13 01:02:23.684234 | orchestrator |
2026-04-13 01:02:23.684245 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 01:02:23.684259 | orchestrator |
2026-04-13 01:02:23.684280 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 01:02:23.684299 | orchestrator | Monday 13 April 2026 00:59:25 +0000 (0:00:00.327) 0:00:00.327 **********
2026-04-13 01:02:23.684320 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:02:23.684341 | orchestrator | ok: [testbed-node-1]
2026-04-13 01:02:23.684358 | orchestrator | ok: [testbed-node-2]
2026-04-13 01:02:23.684370 | orchestrator |
2026-04-13 01:02:23.684381 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 01:02:23.684392 | orchestrator | Monday 13 April 2026 00:59:25 +0000 (0:00:00.301) 0:00:00.629 **********
2026-04-13 01:02:23.684404 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-13 01:02:23.684415 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-13 01:02:23.684426 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-13 01:02:23.684437 | orchestrator |
2026-04-13 01:02:23.684448 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-04-13 01:02:23.684459 | orchestrator |
2026-04-13 01:02:23.684491 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-13 01:02:23.684504 | orchestrator | Monday 13 April 2026 00:59:26 +0000 (0:00:00.291) 0:00:00.920 **********
2026-04-13 01:02:23.684515 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 01:02:23.684526 | orchestrator |
2026-04-13 01:02:23.684537 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-04-13 01:02:23.684549 | orchestrator | Monday 13 April 2026 00:59:26 +0000 (0:00:00.696) 0:00:01.617 **********
2026-04-13 01:02:23.684580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '',
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-13 01:02:23.684599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-13 01:02:23.684613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-13 01:02:23.684626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-13 01:02:23.684655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-13 01:02:23.684674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-13 01:02:23.684686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-13 01:02:23.684698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-13 01:02:23.684710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-13 01:02:23.684721 | orchestrator |
2026-04-13 01:02:23.684733 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-04-13 01:02:23.684744 | orchestrator | Monday 13 April 2026 00:59:28 +0000 (0:00:01.969) 0:00:03.586 **********
2026-04-13 01:02:23.684756 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:02:23.684767 | orchestrator |
2026-04-13 01:02:23.684778 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-04-13 01:02:23.684789 | orchestrator | Monday 13 April 2026 00:59:28 +0000 (0:00:00.118) 0:00:03.705 **********
2026-04-13 01:02:23.684801 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:02:23.684812 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:02:23.684823 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:02:23.684835 | orchestrator |
2026-04-13 01:02:23.684846 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-04-13 01:02:23.684857 | orchestrator | Monday 13 April 2026 00:59:29 +0000 (0:00:00.287) 0:00:03.993 **********
2026-04-13 01:02:23.684868 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-13 01:02:23.684879 | orchestrator |
2026-04-13 01:02:23.684891 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-13 01:02:23.684908 | orchestrator | Monday 13 April 2026 00:59:30 +0000 (0:00:01.033) 0:00:05.026 **********
2026-04-13 01:02:23.684920 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 01:02:23.684931 | orchestrator |
2026-04-13 01:02:23.684942 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-04-13 01:02:23.684959 | orchestrator | Monday 13 April 2026 00:59:30 +0000 (0:00:00.719) 0:00:05.746 **********
2026-04-13 01:02:23.684977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-13 01:02:23.684991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-13 01:02:23.685004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-13 01:02:23.685017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-13 01:02:23.685042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-13 01:02:23.685059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-13 01:02:23.685071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-13 01:02:23.685083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-13 01:02:23.685095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-13 01:02:23.685106 | orchestrator |
2026-04-13 01:02:23.685118 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-04-13 01:02:23.685129 | orchestrator | Monday 13 April 2026 00:59:34 +0000 (0:00:03.192) 0:00:08.939 **********
2026-04-13 01:02:23.685142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-13 01:02:23.685166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-13 01:02:23.685225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 01:02:23.685240 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:02:23.685253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-13 01:02:23.685266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-04-13 01:02:23.685278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 01:02:23.685297 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:02:23.685318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-13 01:02:23.685335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 01:02:23.685348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 01:02:23.685359 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:02:23.685371 | orchestrator | 2026-04-13 01:02:23.685382 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-13 01:02:23.685394 | orchestrator | Monday 13 April 2026 00:59:34 +0000 (0:00:00.574) 0:00:09.513 ********** 2026-04-13 01:02:23.685406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-13 01:02:23.685418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 01:02:23.685436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 01:02:23.685448 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:02:23.685474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-13 01:02:23.685496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 01:02:23.685517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 01:02:23.685535 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:02:23.685556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-13 01:02:23.685587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 
01:02:23.685617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 01:02:23.685638 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:02:23.685658 | orchestrator | 2026-04-13 01:02:23.685678 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-13 01:02:23.685699 | orchestrator | Monday 13 April 2026 00:59:35 +0000 (0:00:00.936) 0:00:10.450 ********** 2026-04-13 01:02:23.685718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-13 01:02:23.685732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-13 01:02:23.685753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-13 01:02:23.685772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 01:02:23.685790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 01:02:23.685802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 01:02:23.685814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 01:02:23.685826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 01:02:23.685843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}}) 2026-04-13 01:02:23.685855 | orchestrator | 2026-04-13 01:02:23.685866 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-13 01:02:23.685878 | orchestrator | Monday 13 April 2026 00:59:38 +0000 (0:00:03.150) 0:00:13.600 ********** 2026-04-13 01:02:23.685896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-13 01:02:23.685914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 
01:02:23.685927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-13 01:02:23.685940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 01:02:23.685958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-13 01:02:23.685970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 01:02:23.685989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 01:02:23.686006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 01:02:23.686065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 01:02:23.686078 | orchestrator | 2026-04-13 01:02:23.686090 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-13 01:02:23.686108 | orchestrator | Monday 13 April 2026 00:59:44 +0000 (0:00:05.609) 0:00:19.210 ********** 2026-04-13 01:02:23.686119 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:02:23.686131 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:02:23.686142 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:02:23.686153 | orchestrator | 2026-04-13 01:02:23.686165 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] 
************* 2026-04-13 01:02:23.686234 | orchestrator | Monday 13 April 2026 00:59:45 +0000 (0:00:01.395) 0:00:20.605 ********** 2026-04-13 01:02:23.686246 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:02:23.686258 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:02:23.686269 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:02:23.686280 | orchestrator | 2026-04-13 01:02:23.686292 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-13 01:02:23.686303 | orchestrator | Monday 13 April 2026 00:59:46 +0000 (0:00:00.966) 0:00:21.571 ********** 2026-04-13 01:02:23.686314 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:02:23.686325 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:02:23.686336 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:02:23.686347 | orchestrator | 2026-04-13 01:02:23.686359 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-13 01:02:23.686370 | orchestrator | Monday 13 April 2026 00:59:47 +0000 (0:00:00.296) 0:00:21.868 ********** 2026-04-13 01:02:23.686381 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:02:23.686392 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:02:23.686403 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:02:23.686415 | orchestrator | 2026-04-13 01:02:23.686426 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-13 01:02:23.686438 | orchestrator | Monday 13 April 2026 00:59:47 +0000 (0:00:00.297) 0:00:22.165 ********** 2026-04-13 01:02:23.686450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-13 01:02:23.686475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 01:02:23.686488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 01:02:23.686511 | 
orchestrator | skipping: [testbed-node-0] 2026-04-13 01:02:23.686523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-13 01:02:23.686536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 01:02:23.686548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 01:02:23.686560 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:02:23.686579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-13 01:02:23.686596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 01:02:23.686615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 01:02:23.686627 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:02:23.686638 | orchestrator | 2026-04-13 01:02:23.686650 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-13 01:02:23.686661 | orchestrator | Monday 13 April 2026 00:59:47 +0000 (0:00:00.533) 0:00:22.698 ********** 2026-04-13 01:02:23.686673 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:02:23.686684 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:02:23.686695 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:02:23.686706 | orchestrator | 2026-04-13 01:02:23.686716 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-13 01:02:23.686727 | orchestrator | Monday 13 April 2026 00:59:48 +0000 (0:00:00.569) 0:00:23.268 ********** 2026-04-13 01:02:23.686737 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-13 01:02:23.686747 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-13 01:02:23.686757 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-13 01:02:23.686767 | orchestrator | 2026-04-13 01:02:23.686777 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-13 01:02:23.686787 | orchestrator | Monday 13 April 2026 00:59:50 +0000 (0:00:01.588) 0:00:24.856 ********** 2026-04-13 01:02:23.686797 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-13 01:02:23.686807 | orchestrator | 2026-04-13 01:02:23.686817 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-13 01:02:23.686827 | orchestrator | Monday 13 April 2026 00:59:51 +0000 (0:00:01.112) 0:00:25.969 ********** 2026-04-13 01:02:23.686837 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:02:23.686847 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:02:23.686857 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:02:23.686868 | orchestrator | 2026-04-13 01:02:23.686878 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-13 01:02:23.686888 | orchestrator | Monday 13 April 2026 00:59:51 +0000 (0:00:00.540) 0:00:26.509 ********** 2026-04-13 01:02:23.686898 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-13 01:02:23.686908 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-13 01:02:23.686918 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-13 01:02:23.686928 | orchestrator | 2026-04-13 01:02:23.686938 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-13 01:02:23.686948 | orchestrator | Monday 13 April 2026 00:59:52 +0000 (0:00:01.249) 0:00:27.759 ********** 2026-04-13 01:02:23.686958 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:02:23.686968 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:02:23.686978 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:02:23.686988 | orchestrator | 2026-04-13 
01:02:23.686998 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-13 01:02:23.687008 | orchestrator | Monday 13 April 2026 00:59:53 +0000 (0:00:00.529) 0:00:28.289 ********** 2026-04-13 01:02:23.687018 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-13 01:02:23.687034 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-13 01:02:23.687044 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-13 01:02:23.687054 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-13 01:02:23.687064 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-13 01:02:23.687079 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-13 01:02:23.687090 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-13 01:02:23.687100 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-13 01:02:23.687110 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-13 01:02:23.687124 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-13 01:02:23.687134 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-13 01:02:23.687144 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-13 01:02:23.687154 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 
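The tasks above template out `crontab`, `fernet-rotate.sh`, and `fernet-node-sync.sh`, which implement Keystone's fernet key rotation on the nodes. As a minimal sketch of the rotation convention those scripts rely on (key `0` is the staged key, the highest index is the primary, older keys become secondaries until pruned) — this is a simulation over a plain dict with placeholder key material, not the kolla scripts or `keystone-manage fernet_rotate` itself:

```python
import secrets

def rotate_fernet_keys(repo, max_active_keys=3):
    """One rotation step over a key repository mapping index -> key material.

    Follows the Keystone fernet convention: key 0 is the staged key and the
    highest index is the primary. Rotation promotes the staged key to a new
    primary, writes a fresh staged key 0, and prunes the oldest secondary
    keys once more than max_active_keys are present.
    """
    primary = max(repo)
    repo[primary + 1] = repo.pop(0)        # staged key becomes the new primary
    repo[0] = secrets.token_urlsafe(32)    # fresh staged key
    while len(repo) > max_active_keys:
        oldest = min(k for k in repo if k != 0)
        del repo[oldest]                   # drop the oldest secondary
    return repo

repo = {0: "staged-key", 1: "primary-key"}
rotate_fernet_keys(repo)
# keys now: 0 (new staged), 1 (secondary), 2 (primary)
```

After rotation, `fernet-node-sync.sh` / `fernet-push.sh` distribute the repository to the other keystone hosts over the `keystone-ssh` container, which is why those files are copied alongside the rotation script.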
2026-04-13 01:02:23.687165 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-13 01:02:23.687190 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-13 01:02:23.687201 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-13 01:02:23.687211 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-13 01:02:23.687221 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-13 01:02:23.687231 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-13 01:02:23.687242 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-13 01:02:23.687252 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-13 01:02:23.687262 | orchestrator | 2026-04-13 01:02:23.687272 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-13 01:02:23.687282 | orchestrator | Monday 13 April 2026 01:00:02 +0000 (0:00:08.813) 0:00:37.103 ********** 2026-04-13 01:02:23.687292 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-13 01:02:23.687302 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-13 01:02:23.687312 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-13 01:02:23.687322 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-13 01:02:23.687331 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-13 01:02:23.687342 | orchestrator | 
changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-13 01:02:23.687352 | orchestrator | 2026-04-13 01:02:23.687362 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-04-13 01:02:23.687371 | orchestrator | Monday 13 April 2026 01:00:05 +0000 (0:00:02.697) 0:00:39.801 ********** 2026-04-13 01:02:23.687383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-13 01:02:23.687406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-13 01:02:23.687422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-13 01:02:23.687434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 01:02:23.687444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 01:02:23.687460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 01:02:23.687471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 01:02:23.687487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 01:02:23.687502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 01:02:23.687512 | orchestrator | 2026-04-13 01:02:23.687523 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-13 01:02:23.687533 | orchestrator | Monday 13 April 2026 01:00:07 +0000 (0:00:02.273) 0:00:42.075 ********** 2026-04-13 01:02:23.687543 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:02:23.687553 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:02:23.687563 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:02:23.687573 | orchestrator | 2026-04-13 01:02:23.687584 | 
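The container definitions checked above each carry a `healthcheck` block (`interval: 30`, `retries: 3`, `start_period: 5`, `timeout: 30`, with a `CMD-SHELL` test such as `healthcheck_curl http://192.168.16.10:5000`). A minimal sketch of the retry semantics those fields describe, with a stubbed probe function; `run_healthcheck` is an illustrative name, not part of kolla:

```python
import time

def run_healthcheck(check, retries=3, interval=30, start_period=5,
                    sleep=time.sleep):
    """Retry `check` up to `retries` times, `interval` seconds apart.

    Mirrors the interval/retries/start_period fields in the container
    definitions; `check` returns True once the service is healthy.
    """
    sleep(start_period)              # grace period before the first probe
    for attempt in range(retries):
        if check():
            return True
        if attempt < retries - 1:
            sleep(interval)          # wait before the next probe
    return False

# Stub probe that only becomes healthy on the third attempt.
probes = iter([False, False, True])
assert run_healthcheck(lambda: next(probes), sleep=lambda s: None)
```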
orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-04-13 01:02:23.687594 | orchestrator | Monday 13 April 2026 01:00:07 +0000 (0:00:00.495) 0:00:42.571 **********
2026-04-13 01:02:23.687604 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:02:23.687614 | orchestrator |
2026-04-13 01:02:23.687624 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-04-13 01:02:23.687634 | orchestrator | Monday 13 April 2026 01:00:09 +0000 (0:00:02.202) 0:00:44.773 **********
2026-04-13 01:02:23.687644 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:02:23.687654 | orchestrator |
2026-04-13 01:02:23.687664 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-04-13 01:02:23.687674 | orchestrator | Monday 13 April 2026 01:00:12 +0000 (0:00:02.335) 0:00:47.109 **********
2026-04-13 01:02:23.687684 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:02:23.687694 | orchestrator | ok: [testbed-node-1]
2026-04-13 01:02:23.687704 | orchestrator | ok: [testbed-node-2]
2026-04-13 01:02:23.687719 | orchestrator |
2026-04-13 01:02:23.687730 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-04-13 01:02:23.687740 | orchestrator | Monday 13 April 2026 01:00:13 +0000 (0:00:00.847) 0:00:47.956 **********
2026-04-13 01:02:23.687750 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:02:23.687760 | orchestrator | ok: [testbed-node-1]
2026-04-13 01:02:23.687770 | orchestrator | ok: [testbed-node-2]
2026-04-13 01:02:23.687780 | orchestrator |
2026-04-13 01:02:23.687790 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-04-13 01:02:23.687800 | orchestrator | Monday 13 April 2026 01:00:13 +0000 (0:00:00.363) 0:00:48.320 **********
2026-04-13 01:02:23.687810 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:02:23.687820 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:02:23.687830 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:02:23.687840 | orchestrator |
2026-04-13 01:02:23.687850 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-04-13 01:02:23.687861 | orchestrator | Monday 13 April 2026 01:00:13 +0000 (0:00:00.325) 0:00:48.645 **********
2026-04-13 01:02:23.687871 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:02:23.687881 | orchestrator |
2026-04-13 01:02:23.687891 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-04-13 01:02:23.687901 | orchestrator | Monday 13 April 2026 01:00:29 +0000 (0:00:15.380) 0:01:04.026 **********
2026-04-13 01:02:23.687911 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:02:23.687921 | orchestrator |
2026-04-13 01:02:23.687931 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-13 01:02:23.687941 | orchestrator | Monday 13 April 2026 01:00:40 +0000 (0:00:10.980) 0:01:15.006 **********
2026-04-13 01:02:23.687951 | orchestrator |
2026-04-13 01:02:23.687962 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-13 01:02:23.687972 | orchestrator | Monday 13 April 2026 01:00:40 +0000 (0:00:00.065) 0:01:15.071 **********
2026-04-13 01:02:23.687982 | orchestrator |
2026-04-13 01:02:23.687992 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-13 01:02:23.688002 | orchestrator | Monday 13 April 2026 01:00:40 +0000 (0:00:00.063) 0:01:15.135 **********
2026-04-13 01:02:23.688012 | orchestrator |
2026-04-13 01:02:23.688022 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-04-13 01:02:23.688032 | orchestrator | Monday 13 April 2026 01:00:40 +0000 (0:00:00.066) 0:01:15.201 **********
2026-04-13 01:02:23.688042 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:02:23.688052 | orchestrator | changed: [testbed-node-2]
2026-04-13 01:02:23.688062 | orchestrator | changed: [testbed-node-1]
2026-04-13 01:02:23.688072 | orchestrator |
2026-04-13 01:02:23.688082 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-04-13 01:02:23.688092 | orchestrator | Monday 13 April 2026 01:01:10 +0000 (0:00:29.997) 0:01:45.199 **********
2026-04-13 01:02:23.688102 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:02:23.688112 | orchestrator | changed: [testbed-node-1]
2026-04-13 01:02:23.688122 | orchestrator | changed: [testbed-node-2]
2026-04-13 01:02:23.688132 | orchestrator |
2026-04-13 01:02:23.688142 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-04-13 01:02:23.688152 | orchestrator | Monday 13 April 2026 01:01:21 +0000 (0:00:11.346) 0:01:56.545 **********
2026-04-13 01:02:23.688167 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:02:23.688193 | orchestrator | changed: [testbed-node-1]
2026-04-13 01:02:23.688203 | orchestrator | changed: [testbed-node-2]
2026-04-13 01:02:23.688213 | orchestrator |
2026-04-13 01:02:23.688223 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-13 01:02:23.688234 | orchestrator | Monday 13 April 2026 01:01:33 +0000 (0:00:12.084) 0:02:08.629 **********
2026-04-13 01:02:23.688244 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 01:02:23.688254 | orchestrator |
2026-04-13 01:02:23.688264 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-04-13 01:02:23.688283 | orchestrator | Monday 13 April 2026 01:01:34 +0000 (0:00:00.821) 0:02:09.450 **********
2026-04-13 01:02:23.688293 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:02:23.688303 | orchestrator | ok: [testbed-node-1]
2026-04-13 01:02:23.688314 | orchestrator | ok: [testbed-node-2]
2026-04-13 01:02:23.688324 | orchestrator |
2026-04-13 01:02:23.688334 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-04-13 01:02:23.688344 | orchestrator | Monday 13 April 2026 01:01:35 +0000 (0:00:00.747) 0:02:10.197 **********
2026-04-13 01:02:23.688354 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:02:23.688364 | orchestrator |
2026-04-13 01:02:23.688374 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-04-13 01:02:23.688384 | orchestrator | Monday 13 April 2026 01:01:37 +0000 (0:00:01.768) 0:02:11.966 **********
2026-04-13 01:02:23.688394 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-04-13 01:02:23.688404 | orchestrator |
2026-04-13 01:02:23.688414 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-04-13 01:02:23.688425 | orchestrator | Monday 13 April 2026 01:01:49 +0000 (0:00:12.054) 0:02:24.020 **********
2026-04-13 01:02:23.688435 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-04-13 01:02:23.688445 | orchestrator |
2026-04-13 01:02:23.688455 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-04-13 01:02:23.688465 | orchestrator | Monday 13 April 2026 01:02:06 +0000 (0:00:17.718) 0:02:41.738 **********
2026-04-13 01:02:23.688475 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-04-13 01:02:23.688485 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-04-13 01:02:23.688495 | orchestrator |
2026-04-13 01:02:23.688505 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-04-13 01:02:23.688515 | orchestrator | Monday 13 April 2026 01:02:14 +0000 (0:00:07.907) 0:02:49.646 **********
2026-04-13 01:02:23.688525 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:02:23.688535 | orchestrator |
2026-04-13 01:02:23.688545 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-04-13 01:02:23.688556 | orchestrator | Monday 13 April 2026 01:02:15 +0000 (0:00:00.277) 0:02:49.923 **********
2026-04-13 01:02:23.688566 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:02:23.688576 | orchestrator |
2026-04-13 01:02:23.688586 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-04-13 01:02:23.688596 | orchestrator | Monday 13 April 2026 01:02:15 +0000 (0:00:00.232) 0:02:50.155 **********
2026-04-13 01:02:23.688606 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:02:23.688616 | orchestrator |
2026-04-13 01:02:23.688626 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-04-13 01:02:23.688636 | orchestrator | Monday 13 April 2026 01:02:15 +0000 (0:00:00.263) 0:02:50.419 **********
2026-04-13 01:02:23.688646 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:02:23.688656 | orchestrator |
2026-04-13 01:02:23.688666 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-04-13 01:02:23.688676 | orchestrator | Monday 13 April 2026 01:02:16 +0000 (0:00:00.734) 0:02:51.154 **********
2026-04-13 01:02:23.688686 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:02:23.688697 | orchestrator |
2026-04-13 01:02:23.688707 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-13 01:02:23.688717 | orchestrator | Monday 13 April 2026 01:02:19 +0000 (0:00:03.288) 0:02:54.442 **********
2026-04-13 01:02:23.688727 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:02:23.688737 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:02:23.688747 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:02:23.688757 | orchestrator |
2026-04-13 01:02:23.688767 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 01:02:23.688777 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-13 01:02:23.688793 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-13 01:02:23.688803 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-13 01:02:23.688813 | orchestrator |
2026-04-13 01:02:23.688824 | orchestrator |
2026-04-13 01:02:23.688834 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 01:02:23.688844 | orchestrator | Monday 13 April 2026 01:02:20 +0000 (0:00:00.798) 0:02:55.241 **********
2026-04-13 01:02:23.688853 | orchestrator | ===============================================================================
2026-04-13 01:02:23.688864 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 30.00s
2026-04-13 01:02:23.688874 | orchestrator | service-ks-register : keystone | Creating services --------------------- 17.72s
2026-04-13 01:02:23.688884 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.38s
2026-04-13 01:02:23.688894 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.08s
2026-04-13 01:02:23.688909 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.05s
2026-04-13 01:02:23.688920 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 11.35s
2026-04-13 01:02:23.688930 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.98s
2026-04-13 01:02:23.688940 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.81s
2026-04-13 01:02:23.688950 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.91s
2026-04-13 01:02:23.688960 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.61s
2026-04-13 01:02:23.688974 | orchestrator | keystone : Creating default user role ----------------------------------- 3.29s
2026-04-13 01:02:23.688985 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.19s
2026-04-13 01:02:23.688995 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.15s
2026-04-13 01:02:23.689005 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.70s
2026-04-13 01:02:23.689015 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.34s
2026-04-13 01:02:23.689025 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.27s
2026-04-13 01:02:23.689035 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.20s
2026-04-13 01:02:23.689045 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.97s
2026-04-13 01:02:23.689055 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.77s
2026-04-13 01:02:23.689065 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.59s
2026-04-13 01:02:23.689075 | orchestrator | 2026-04-13 01:02:23 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED
2026-04-13 01:02:23.689086 | orchestrator | 2026-04-13 01:02:23 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED
2026-04-13 01:02:23.689096 | orchestrator | 2026-04-13 01:02:23 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED
2026-04-13
01:02:23.689106 | orchestrator | 2026-04-13 01:02:23 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:02:23.689117 | orchestrator | 2026-04-13 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:26.722973 | orchestrator | 2026-04-13 01:02:26 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED 2026-04-13 01:02:26.725124 | orchestrator | 2026-04-13 01:02:26 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:02:26.726905 | orchestrator | 2026-04-13 01:02:26 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:02:26.728857 | orchestrator | 2026-04-13 01:02:26 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED 2026-04-13 01:02:26.730806 | orchestrator | 2026-04-13 01:02:26 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:02:26.730855 | orchestrator | 2026-04-13 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:29.773047 | orchestrator | 2026-04-13 01:02:29 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED 2026-04-13 01:02:29.773125 | orchestrator | 2026-04-13 01:02:29 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:02:29.773678 | orchestrator | 2026-04-13 01:02:29 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:02:29.774516 | orchestrator | 2026-04-13 01:02:29 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED 2026-04-13 01:02:29.775421 | orchestrator | 2026-04-13 01:02:29 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:02:29.775461 | orchestrator | 2026-04-13 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:32.819429 | orchestrator | 2026-04-13 01:02:32 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state STARTED 2026-04-13 01:02:32.823873 | orchestrator 
| 2026-04-13 01:02:32 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:02:32.824773 | orchestrator | 2026-04-13 01:02:32 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:02:32.828587 | orchestrator | 2026-04-13 01:02:32 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED 2026-04-13 01:02:32.829411 | orchestrator | 2026-04-13 01:02:32 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:02:32.829433 | orchestrator | 2026-04-13 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:35.860670 | orchestrator | 2026-04-13 01:02:35 | INFO  | Task f28a71bb-f7b3-43f2-9613-b22f93d832ee is in state SUCCESS 2026-04-13 01:02:35.861416 | orchestrator | 2026-04-13 01:02:35 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:02:35.863104 | orchestrator | 2026-04-13 01:02:35 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:02:35.864352 | orchestrator | 2026-04-13 01:02:35 | INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state STARTED 2026-04-13 01:02:35.866698 | orchestrator | 2026-04-13 01:02:35 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:02:35.866748 | orchestrator | 2026-04-13 01:02:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:38.909800 | orchestrator | 2026-04-13 01:02:38.909925 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-13 01:02:38.909943 | orchestrator | 2.16.14 2026-04-13 01:02:38.909956 | orchestrator | 2026-04-13 01:02:38.909968 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-04-13 01:02:38.909980 | orchestrator | 2026-04-13 01:02:38.909991 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-13 01:02:38.910003 | orchestrator 
| Monday 13 April 2026 01:01:46 +0000 (0:00:00.258) 0:00:00.258 ********** 2026-04-13 01:02:38.910193 | orchestrator | changed: [testbed-manager] 2026-04-13 01:02:38.910216 | orchestrator | 2026-04-13 01:02:38.910227 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-13 01:02:38.910239 | orchestrator | Monday 13 April 2026 01:01:47 +0000 (0:00:01.846) 0:00:02.105 ********** 2026-04-13 01:02:38.910295 | orchestrator | changed: [testbed-manager] 2026-04-13 01:02:38.910310 | orchestrator | 2026-04-13 01:02:38.910326 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-13 01:02:38.910339 | orchestrator | Monday 13 April 2026 01:01:49 +0000 (0:00:01.221) 0:00:03.326 ********** 2026-04-13 01:02:38.910352 | orchestrator | changed: [testbed-manager] 2026-04-13 01:02:38.910365 | orchestrator | 2026-04-13 01:02:38.910378 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-13 01:02:38.910403 | orchestrator | Monday 13 April 2026 01:01:50 +0000 (0:00:01.513) 0:00:04.840 ********** 2026-04-13 01:02:38.910415 | orchestrator | changed: [testbed-manager] 2026-04-13 01:02:38.910426 | orchestrator | 2026-04-13 01:02:38.910438 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-13 01:02:38.910449 | orchestrator | Monday 13 April 2026 01:01:52 +0000 (0:00:01.486) 0:00:06.326 ********** 2026-04-13 01:02:38.910460 | orchestrator | changed: [testbed-manager] 2026-04-13 01:02:38.910471 | orchestrator | 2026-04-13 01:02:38.910483 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-13 01:02:38.910494 | orchestrator | Monday 13 April 2026 01:01:53 +0000 (0:00:01.094) 0:00:07.421 ********** 2026-04-13 01:02:38.910505 | orchestrator | changed: [testbed-manager] 2026-04-13 01:02:38.910516 | orchestrator | 2026-04-13 
01:02:38.910528 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-13 01:02:38.910539 | orchestrator | Monday 13 April 2026 01:01:54 +0000 (0:00:01.166) 0:00:08.587 ********** 2026-04-13 01:02:38.910550 | orchestrator | changed: [testbed-manager] 2026-04-13 01:02:38.910562 | orchestrator | 2026-04-13 01:02:38.910573 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-13 01:02:38.910584 | orchestrator | Monday 13 April 2026 01:01:56 +0000 (0:00:02.197) 0:00:10.785 ********** 2026-04-13 01:02:38.910595 | orchestrator | changed: [testbed-manager] 2026-04-13 01:02:38.910606 | orchestrator | 2026-04-13 01:02:38.910618 | orchestrator | TASK [Create admin user] ******************************************************* 2026-04-13 01:02:38.910629 | orchestrator | Monday 13 April 2026 01:01:57 +0000 (0:00:01.313) 0:00:12.099 ********** 2026-04-13 01:02:38.910640 | orchestrator | changed: [testbed-manager] 2026-04-13 01:02:38.910651 | orchestrator | 2026-04-13 01:02:38.910663 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-13 01:02:38.910674 | orchestrator | Monday 13 April 2026 01:02:07 +0000 (0:00:09.771) 0:00:21.870 ********** 2026-04-13 01:02:38.910685 | orchestrator | skipping: [testbed-manager] 2026-04-13 01:02:38.910696 | orchestrator | 2026-04-13 01:02:38.910708 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-13 01:02:38.910720 | orchestrator | 2026-04-13 01:02:38.910731 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-13 01:02:38.910742 | orchestrator | Monday 13 April 2026 01:02:07 +0000 (0:00:00.182) 0:00:22.053 ********** 2026-04-13 01:02:38.910754 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:02:38.910765 | orchestrator | 2026-04-13 01:02:38.910776 | orchestrator | PLAY 
[Restart ceph manager services] ******************************************* 2026-04-13 01:02:38.910788 | orchestrator | 2026-04-13 01:02:38.910799 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-13 01:02:38.910810 | orchestrator | Monday 13 April 2026 01:02:20 +0000 (0:00:12.171) 0:00:34.224 ********** 2026-04-13 01:02:38.910821 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:02:38.910832 | orchestrator | 2026-04-13 01:02:38.910844 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-13 01:02:38.910855 | orchestrator | 2026-04-13 01:02:38.910866 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-13 01:02:38.910878 | orchestrator | Monday 13 April 2026 01:02:31 +0000 (0:00:11.444) 0:00:45.668 ********** 2026-04-13 01:02:38.910889 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:02:38.910900 | orchestrator | 2026-04-13 01:02:38.910912 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:02:38.910933 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 01:02:38.910945 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:02:38.910957 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:02:38.910968 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:02:38.910979 | orchestrator | 2026-04-13 01:02:38.910990 | orchestrator | 2026-04-13 01:02:38.911002 | orchestrator | 2026-04-13 01:02:38.911013 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:02:38.911025 | orchestrator | Monday 13 April 2026 01:02:32 +0000 
(0:00:01.376) 0:00:47.045 ********** 2026-04-13 01:02:38.911036 | orchestrator | =============================================================================== 2026-04-13 01:02:38.911047 | orchestrator | Restart ceph manager service ------------------------------------------- 24.99s 2026-04-13 01:02:38.911087 | orchestrator | Create admin user ------------------------------------------------------- 9.77s 2026-04-13 01:02:38.911100 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.20s 2026-04-13 01:02:38.911111 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.85s 2026-04-13 01:02:38.911122 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.51s 2026-04-13 01:02:38.911134 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.49s 2026-04-13 01:02:38.911145 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.31s 2026-04-13 01:02:38.911156 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.22s 2026-04-13 01:02:38.911186 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.17s 2026-04-13 01:02:38.911198 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.09s 2026-04-13 01:02:38.911209 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s 2026-04-13 01:02:38.911220 | orchestrator | 2026-04-13 01:02:38.911232 | orchestrator | 2026-04-13 01:02:38.911243 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 01:02:38.911254 | orchestrator | 2026-04-13 01:02:38.911266 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 01:02:38.911277 | orchestrator | Monday 13 April 2026 01:01:52 +0000 
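The "Bootstrap ceph dashboard" play recapped above (disable the module, set the `mgr/dashboard/*` options, re-enable it, create the admin user from a temporary password file) corresponds roughly to the following Ceph CLI sequence. This is a hedged sketch of the equivalent commands, not the playbook's actual module calls; the password file path and variable name are illustrative.

```shell
# Approximate CLI equivalent of the "Bootstrap ceph dashboard" play (illustrative).
ceph mgr module disable dashboard
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_port 7000
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/standby_behaviour error
ceph config set mgr mgr/dashboard/standby_error_status_code 404
ceph mgr module enable dashboard
# Write the password to a temporary file, create the admin user, then clean up.
echo "$CEPH_DASHBOARD_PASSWORD" > /tmp/ceph_dashboard_password
ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator
rm -f /tmp/ceph_dashboard_password
```

The trailing restart of the `ceph-mgr` service on each node (the "Restart ceph manager service" plays) is what makes the new dashboard settings take effect.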
(0:00:00.337) 0:00:00.337 ********** 2026-04-13 01:02:38.911288 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:02:38.911300 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:02:38.911311 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:02:38.911322 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:02:38.911334 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:02:38.911345 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:02:38.911356 | orchestrator | ok: [testbed-manager] 2026-04-13 01:02:38.911367 | orchestrator | 2026-04-13 01:02:38.911379 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 01:02:38.911390 | orchestrator | Monday 13 April 2026 01:01:53 +0000 (0:00:00.860) 0:00:01.197 ********** 2026-04-13 01:02:38.911402 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-04-13 01:02:38.911414 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-04-13 01:02:38.911425 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-04-13 01:02:38.911436 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-04-13 01:02:38.911447 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-04-13 01:02:38.911458 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-04-13 01:02:38.911469 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-04-13 01:02:38.911489 | orchestrator | 2026-04-13 01:02:38.911500 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-13 01:02:38.911511 | orchestrator | 2026-04-13 01:02:38.911523 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-04-13 01:02:38.911534 | orchestrator | Monday 13 April 2026 01:01:54 +0000 (0:00:00.982) 0:00:02.180 ********** 2026-04-13 01:02:38.911546 | orchestrator | included: 
/ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-13 01:02:38.911558 | orchestrator | 2026-04-13 01:02:38.911569 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-04-13 01:02:38.911581 | orchestrator | Monday 13 April 2026 01:01:56 +0000 (0:00:01.979) 0:00:04.159 ********** 2026-04-13 01:02:38.911592 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2026-04-13 01:02:38.911603 | orchestrator | 2026-04-13 01:02:38.911614 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-04-13 01:02:38.911626 | orchestrator | Monday 13 April 2026 01:02:09 +0000 (0:00:12.943) 0:00:17.103 ********** 2026-04-13 01:02:38.911637 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-04-13 01:02:38.911649 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-04-13 01:02:38.911680 | orchestrator | 2026-04-13 01:02:38.911713 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-04-13 01:02:38.911733 | orchestrator | Monday 13 April 2026 01:02:17 +0000 (0:00:08.108) 0:00:25.211 ********** 2026-04-13 01:02:38.911751 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-13 01:02:38.911770 | orchestrator | 2026-04-13 01:02:38.911783 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-04-13 01:02:38.911795 | orchestrator | Monday 13 April 2026 01:02:20 +0000 (0:00:03.014) 0:00:28.225 ********** 2026-04-13 01:02:38.911806 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2026-04-13 01:02:38.911817 | orchestrator | [WARNING]: Module did not set no_log 
for update_password 2026-04-13 01:02:38.911828 | orchestrator | 2026-04-13 01:02:38.911840 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-04-13 01:02:38.911851 | orchestrator | Monday 13 April 2026 01:02:24 +0000 (0:00:03.671) 0:00:31.897 ********** 2026-04-13 01:02:38.911862 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-13 01:02:38.911873 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2026-04-13 01:02:38.911884 | orchestrator | 2026-04-13 01:02:38.911897 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-04-13 01:02:38.911915 | orchestrator | Monday 13 April 2026 01:02:29 +0000 (0:00:05.638) 0:00:37.536 ********** 2026-04-13 01:02:38.911932 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2026-04-13 01:02:38.911949 | orchestrator | 2026-04-13 01:02:38.911969 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:02:38.912007 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:02:38.912021 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:02:38.912032 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:02:38.912044 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:02:38.912055 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:02:38.912066 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:02:38.912084 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:02:38.912096 | orchestrator 
| 2026-04-13 01:02:38.912107 | orchestrator | 2026-04-13 01:02:38.912118 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:02:38.912130 | orchestrator | Monday 13 April 2026 01:02:35 +0000 (0:00:05.820) 0:00:43.357 ********** 2026-04-13 01:02:38.912145 | orchestrator | =============================================================================== 2026-04-13 01:02:38.912206 | orchestrator | service-ks-register : ceph-rgw | Creating services --------------------- 12.94s 2026-04-13 01:02:38.912227 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 8.11s 2026-04-13 01:02:38.912245 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.82s 2026-04-13 01:02:38.912262 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.64s 2026-04-13 01:02:38.912280 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.67s 2026-04-13 01:02:38.912298 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.01s 2026-04-13 01:02:38.912317 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.98s 2026-04-13 01:02:38.912336 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.98s 2026-04-13 01:02:38.912357 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s 2026-04-13 01:02:38.912370 | orchestrator | 2026-04-13 01:02:38 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:02:38.912381 | orchestrator | 2026-04-13 01:02:38 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:02:38.912392 | orchestrator | 2026-04-13 01:02:38 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:02:38.912404 | orchestrator | 2026-04-13 01:02:38 
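The `service-ks-register` tasks for ceph-rgw recapped above register the RGW Swift-compatible API in Keystone. A hedged sketch of the equivalent OpenStack CLI calls (the endpoint URLs are taken from the log; the password variable is illustrative):

```shell
# Approximate CLI equivalent of the service-ks-register : ceph-rgw tasks (illustrative).
openstack service create --name swift object-store
openstack endpoint create swift internal \
    'https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s'
openstack endpoint create swift public \
    'https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s'
openstack user create --project service --password "$CEPH_RGW_PASSWORD" ceph_rgw
openstack role create ResellerAdmin
openstack role add --user ceph_rgw --project service admin
```

Only testbed-node-0 reports `changed` for these tasks because the registration runs once against the shared Keystone API, with `run_once`-style delegation.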
| INFO  | Task 65e8dd2f-2a80-4b18-a9be-799d59ae93b5 is in state SUCCESS 2026-04-13 01:02:38.912540 | orchestrator | 2026-04-13 01:02:38 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:02:38.912555 | orchestrator | 2026-04-13 01:02:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:41.949650 | orchestrator | 2026-04-13 01:02:41 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:02:41.950053 | orchestrator | 2026-04-13 01:02:41 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:02:41.950969 | orchestrator | 2026-04-13 01:02:41 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:02:41.951956 | orchestrator | 2026-04-13 01:02:41 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:02:41.951969 | orchestrator | 2026-04-13 01:02:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:45.006851 | orchestrator | 2026-04-13 01:02:45 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:02:45.007288 | orchestrator | 2026-04-13 01:02:45 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:02:45.008275 | orchestrator | 2026-04-13 01:02:45 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:02:45.008998 | orchestrator | 2026-04-13 01:02:45 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:02:45.009026 | orchestrator | 2026-04-13 01:02:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:48.040875 | orchestrator | 2026-04-13 01:02:48 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:02:48.042991 | orchestrator | 2026-04-13 01:02:48 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:02:48.045463 | orchestrator | 2026-04-13 01:02:48 | INFO  | Task 
9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:02:48.048044 | orchestrator | 2026-04-13 01:02:48 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:02:48.048251 | orchestrator | 2026-04-13 01:02:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:51.082651 | orchestrator | 2026-04-13 01:02:51 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:02:51.083293 | orchestrator | 2026-04-13 01:02:51 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:02:51.084269 | orchestrator | 2026-04-13 01:02:51 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:02:51.085133 | orchestrator | 2026-04-13 01:02:51 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:02:51.085228 | orchestrator | 2026-04-13 01:02:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:54.129436 | orchestrator | 2026-04-13 01:02:54 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:02:54.129568 | orchestrator | 2026-04-13 01:02:54 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:02:54.130197 | orchestrator | 2026-04-13 01:02:54 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:02:54.131049 | orchestrator | 2026-04-13 01:02:54 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:02:54.131123 | orchestrator | 2026-04-13 01:02:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:57.161323 | orchestrator | 2026-04-13 01:02:57 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:02:57.161710 | orchestrator | 2026-04-13 01:02:57 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:02:57.162712 | orchestrator | 2026-04-13 01:02:57 | INFO  | Task 
9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:02:57.165025 | orchestrator | 2026-04-13 01:02:57 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:02:57.165113 | orchestrator | 2026-04-13 01:02:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:00.197101 | orchestrator | 2026-04-13 01:03:00 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:03:00.198462 | orchestrator | 2026-04-13 01:03:00 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:03:00.200058 | orchestrator | 2026-04-13 01:03:00 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:03:00.200966 | orchestrator | 2026-04-13 01:03:00 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:03:00.201121 | orchestrator | 2026-04-13 01:03:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:03.227827 | orchestrator | 2026-04-13 01:03:03 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:03:03.228431 | orchestrator | 2026-04-13 01:03:03 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:03:03.229320 | orchestrator | 2026-04-13 01:03:03 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:03:03.230284 | orchestrator | 2026-04-13 01:03:03 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:03:03.230353 | orchestrator | 2026-04-13 01:03:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:06.266669 | orchestrator | 2026-04-13 01:03:06 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:03:06.267217 | orchestrator | 2026-04-13 01:03:06 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:03:06.268175 | orchestrator | 2026-04-13 01:03:06 | INFO  | Task 
9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:03:06.269285 | orchestrator | 2026-04-13 01:03:06 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:03:06.269339 | orchestrator | 2026-04-13 01:03:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:09.312893 | orchestrator | 2026-04-13 01:03:09 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:03:09.313808 | orchestrator | 2026-04-13 01:03:09 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:03:09.314339 | orchestrator | 2026-04-13 01:03:09 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:03:09.315471 | orchestrator | 2026-04-13 01:03:09 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:03:09.315539 | orchestrator | 2026-04-13 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:12.365034 | orchestrator | 2026-04-13 01:03:12 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:03:12.365332 | orchestrator | 2026-04-13 01:03:12 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:03:12.366161 | orchestrator | 2026-04-13 01:03:12 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:03:12.367236 | orchestrator | 2026-04-13 01:03:12 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:03:12.368592 | orchestrator | 2026-04-13 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:15.391459 | orchestrator | 2026-04-13 01:03:15 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:03:15.392094 | orchestrator | 2026-04-13 01:03:15 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:03:15.392995 | orchestrator | 2026-04-13 01:03:15 | INFO  | Task 
9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:03:15.393634 | orchestrator | 2026-04-13 01:03:15 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:03:15.393755 | orchestrator | 2026-04-13 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:18.414308 | orchestrator | 2026-04-13 01:03:18 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:03:18.414777 | orchestrator | 2026-04-13 01:03:18 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:03:18.416103 | orchestrator | 2026-04-13 01:03:18 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:03:18.417166 | orchestrator | 2026-04-13 01:03:18 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:03:18.417193 | orchestrator | 2026-04-13 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:21.444672 | orchestrator | 2026-04-13 01:03:21 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:03:21.445093 | orchestrator | 2026-04-13 01:03:21 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:03:21.445669 | orchestrator | 2026-04-13 01:03:21 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED 2026-04-13 01:03:21.446654 | orchestrator | 2026-04-13 01:03:21 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED 2026-04-13 01:03:21.447318 | orchestrator | 2026-04-13 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:24.518771 | orchestrator | 2026-04-13 01:03:24 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:03:24.518856 | orchestrator | 2026-04-13 01:03:24 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:03:24.518871 | orchestrator | 2026-04-13 01:03:24 | INFO  | Task 
9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state STARTED
2026-04-13 01:03:24.518883 | orchestrator | 2026-04-13 01:03:24 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state STARTED
2026-04-13 01:03:24.518894 | orchestrator | 2026-04-13 01:03:24 | INFO  | Wait 1 second(s) until the next check
[identical state checks for tasks d86f12fb-faf2-4dc8-b707-4b8e064ebab3, b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24, 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 and 14022211-33c2-4baf-882c-344db96c971f repeated every ~3 seconds from 01:03:27 to 01:05:11; the first three tasks remained in state STARTED throughout]
2026-04-13 01:05:11.644844 | orchestrator | 2026-04-13 01:05:11 | INFO  | Task 14022211-33c2-4baf-882c-344db96c971f is in state SUCCESS
2026-04-13 01:05:11.646232 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 01:05:11.646257 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 01:05:11.646268 | orchestrator | Monday 13 April 2026 01:01:44 +0000 (0:00:00.358) 0:00:00.358
**********
2026-04-13 01:05:11.646280 | orchestrator | ok: [testbed-manager]
2026-04-13 01:05:11.646292 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:05:11.646303 | orchestrator | ok: [testbed-node-1]
2026-04-13 01:05:11.646314 | orchestrator | ok: [testbed-node-2]
2026-04-13 01:05:11.646326 | orchestrator | ok: [testbed-node-3]
2026-04-13 01:05:11.646337 | orchestrator | ok: [testbed-node-4]
2026-04-13 01:05:11.646348 | orchestrator | ok: [testbed-node-5]
2026-04-13 01:05:11.646372 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 01:05:11.646383 | orchestrator | Monday 13 April 2026 01:01:45 +0000 (0:00:00.753) 0:00:01.112 **********
2026-04-13 01:05:11.646395 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-04-13 01:05:11.646414 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-04-13 01:05:11.646433 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-04-13 01:05:11.646451 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-04-13 01:05:11.646468 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-04-13 01:05:11.646487 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-04-13 01:05:11.646506 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-04-13 01:05:11.646545 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-04-13 01:05:11.646582 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-13 01:05:11.646601 | orchestrator | Monday 13 April 2026 01:01:46 +0000 (0:00:01.019) 0:00:02.132 **********
2026-04-13 01:05:11.646638 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 01:05:11.646660 | orchestrator | 2026-04-13 01:05:11.646680 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-13 01:05:11.646727 | orchestrator | Monday 13 April 2026 01:01:48 +0000 (0:00:01.466) 0:00:03.599 ********** 2026-04-13 01:05:11.646753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.646780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.646804 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-13 01:05:11.646825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.646866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.646888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.646908 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.646951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.646973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.646995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 
01:05:11.647017 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.647039 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.647123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647164 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-13 01:05:11.647179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.647191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.647211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.647223 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647258 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.647270 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.647282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.647306 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647324 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647337 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647357 | orchestrator | 2026-04-13 01:05:11.647368 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-13 01:05:11.647380 | orchestrator | Monday 13 April 2026 01:01:53 +0000 (0:00:05.033) 0:00:08.632 ********** 2026-04-13 01:05:11.647392 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 01:05:11.647403 | orchestrator | 2026-04-13 01:05:11.647414 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-13 01:05:11.647426 | orchestrator | Monday 13 April 2026 01:01:54 +0000 (0:00:01.670) 0:00:10.303 ********** 2026-04-13 01:05:11.647442 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-13 01:05:11.647455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.647467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.647478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.647500 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.647525 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.647564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.647590 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.647610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.647630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.647648 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.647697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647731 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647751 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.647781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.647802 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': 
{}}}) 2026-04-13 01:05:11.647822 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.647874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.647908 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-13 01:05:11.647992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.648016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.648029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.648041 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.648098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.648861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-13 01:05:11.648886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.648898 | orchestrator | 2026-04-13 01:05:11.648910 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-13 01:05:11.648922 | orchestrator | Monday 13 April 2026 01:02:00 +0000 (0:00:05.997) 0:00:16.300 ********** 2026-04-13 01:05:11.648941 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-13 01:05:11.648954 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:05:11.648966 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.648979 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-13 01:05:11.649009 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649022 | orchestrator | skipping: [testbed-manager] 2026-04-13 01:05:11.649042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:05:11.649120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.649196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649216 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:11.649229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:05:11.649250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.649295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-13 01:05:11.649307 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:05:11.649323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:05:11.649336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.649377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649389 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:11.649408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:05:11.649420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.649432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.649443 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:05:11.649460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:05:11.649474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.649489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.649509 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:05:11.649523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:05:11.649538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.649560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.649587 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:05:11.649601 | orchestrator | 2026-04-13 01:05:11.649620 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-13 01:05:11.649632 | orchestrator | Monday 13 April 2026 01:02:02 +0000 (0:00:01.571) 0:00:17.872 ********** 2026-04-13 01:05:11.649644 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-13 01:05:11.649662 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:05:11.649674 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.649693 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-13 01:05:11.649706 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649723 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:05:11.649735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.649771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649788 | orchestrator | skipping: [testbed-manager] 2026-04-13 01:05:11.649799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:05:11.649809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.649847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:05:11.649877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.649914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:05:11.649925 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:11.649935 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:05:11.649956 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:11.649981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:05:11.649992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.650003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.650080 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:05:11.650101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:05:11.650112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.650130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.650140 | orchestrator | skipping: 
[testbed-node-4] 2026-04-13 01:05:11.650150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:05:11.650161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.650178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-13 01:05:11.650190 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:05:11.650200 | orchestrator | 2026-04-13 01:05:11.650210 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-13 01:05:11.650221 | orchestrator | Monday 13 April 2026 01:02:04 +0000 
(0:00:02.081) 0:00:19.953 ********** 2026-04-13 01:05:11.650231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.650242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.650262 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-13 01:05:11.650273 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.650284 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.650295 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.650310 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.650321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.650332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.650347 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:05:11.650363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.650374 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.650384 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.650395 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.650410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.650421 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.650432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.650451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.650462 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.650473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.650483 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.650494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.650523 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-13 01:05:11.650535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.650555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:05:11.650566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.650576 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.650587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.650597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:05:11.650608 | orchestrator | 2026-04-13 01:05:11.650618 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-13 01:05:11.650628 | orchestrator | Monday 13 April 2026 01:02:10 +0000 (0:00:06.325) 0:00:26.279 ********** 2026-04-13 01:05:11.650639 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 01:05:11.650649 | orchestrator | 2026-04-13 01:05:11.650659 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-13 01:05:11.650683 | orchestrator | Monday 13 April 2026 01:02:11 +0000 (0:00:01.001) 0:00:27.280 ********** 2026-04-13 01:05:11.650698 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314820, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5065985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650730 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314820, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5065985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650752 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1314838, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5112169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650763 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314820, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5065985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650774 | 
orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314820, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5065985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650785 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1314811, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5052493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650802 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1314838, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5112169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650813 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314820, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5065985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650829 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314820, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5065985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 01:05:11.650847 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1314838, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5112169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650858 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1314838, 'dev': 
121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5112169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650868 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1314832, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.510091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650879 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1314811, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5052493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650894 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1314811, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5052493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650911 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1314807, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5030544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650921 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1314811, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5052493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650935 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314820, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5065985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 
01:05:11.650946 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1314822, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5073724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650957 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1314838, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5112169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650967 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1314832, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.510091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650983 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1314832, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.510091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.650999 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1314832, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.510091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651010 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1314811, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5052493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651024 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1314838, 'dev': 
121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5112169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651034 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1314807, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5030544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651045 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1314830, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5094638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651074 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1314807, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5030544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651094 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1314838, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5112169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 01:05:11.651131 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1314832, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.510091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651145 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1314807, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5030544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 
01:05:11.651160 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1314811, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5052493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651171 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1314824, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5076923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651181 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1314807, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5030544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651192 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1314822, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5073724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651208 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1314822, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5073724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651224 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1314822, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5073724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651235 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1314822, 'dev': 121, 'nlink': 1, 
'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5073724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651250 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1314832, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.510091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651261 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1314830, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5094638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.651271 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1314817, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5057201, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 01:05:11.651282 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2026-04-13 01:05:11.651298 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-04-13 01:05:11.651314 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2026-04-13 01:05:11.651325 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2026-04-13 01:05:11.651339 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-13 01:05:11.651350 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2026-04-13 01:05:11.651360 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-13 01:05:11.651371 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-13 01:05:11.651387 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-04-13 01:05:11.651402 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2026-04-13 01:05:11.651413 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-13 01:05:11.651427 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-13 01:05:11.651438 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-13 01:05:11.651449 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-04-13 01:05:11.651459 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-13 01:05:11.651475 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-13 01:05:11.651491 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2026-04-13 01:05:11.651502 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2026-04-13 01:05:11.651517 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-04-13 01:05:11.651527 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2026-04-13 01:05:11.651538 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-13 01:05:11.651557 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-13 01:05:11.651568 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-13 01:05:11.651584 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-13 01:05:11.651595 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-13 01:05:11.651610 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2026-04-13 01:05:11.651621 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-13 01:05:11.651632 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2026-04-13 01:05:11.651648 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-04-13 01:05:11.651659 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-13 01:05:11.651675 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-13 01:05:11.651686 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-13 01:05:11.651700 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-13 01:05:11.651711 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-04-13 01:05:11.651721 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules)
2026-04-13 01:05:11.651737 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules)
2026-04-13 01:05:11.651747 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2026-04-13 01:05:11.651764 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-13 01:05:11.651775 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-13 01:05:11.651790 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-13 01:05:11.651801 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-13 01:05:11.651816 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rec.rules)
2026-04-13 01:05:11.651827 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-13 01:05:11.651837 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules)
2026-04-13 01:05:11.651855 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-13 01:05:11.651866 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2026-04-13 01:05:11.651880 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2026-04-13 01:05:11.651891 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules)
2026-04-13 01:05:11.651908 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)
2026-04-13 01:05:11.651918 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-13 01:05:11.651929 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2026-04-13 01:05:11.651944 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2026-04-13 01:05:11.651955 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)
2026-04-13 01:05:11.651965 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2026-04-13 01:05:11.651980 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-13 01:05:11.651996 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2026-04-13 01:05:11.652006 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/rabbitmq.rules)
2026-04-13 01:05:11.652017 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:05:11.652027 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-13 01:05:11.652043 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules)
2026-04-13 01:05:11.652054 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules)
2026-04-13 01:05:11.652120 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:11.652135 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-13 01:05:11.652152 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2026-04-13 01:05:11.652163 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:11.652173 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules)
2026-04-13 01:05:11.652184 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-13 01:05:11.652195 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 5065, 'inode': 1314805, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5025225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.652211 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314850, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.513612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:05:11.652292 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:11.652303 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1314805, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5025225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.652318 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314827, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5090466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.652337 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314827, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5090466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.652347 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314825, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.507895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.652358 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1314822, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 
1776039472.5073724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 01:05:11.652369 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314825, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.507895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.652384 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314850, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.513612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.652395 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:05:11.652406 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314850, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.513612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-13 01:05:11.652425 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:05:11.652440 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1314830, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5094638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 01:05:11.652451 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1314824, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5076923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 01:05:11.652461 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1314817, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5057201, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}) 2026-04-13 01:05:11.652472 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314837, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5110016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 01:05:11.652483 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314802, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.501606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 01:05:11.652498 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1314852, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5141776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 01:05:11.652510 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1314836, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5107732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 01:05:11.652530 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314810, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5039244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 01:05:11.652541 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1314805, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5025225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 01:05:11.652552 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314827, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.5090466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 01:05:11.652562 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314825, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.507895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 01:05:11.652573 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314850, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.513612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-13 01:05:11.652583 | orchestrator | 2026-04-13 01:05:11.652594 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-13 01:05:11.652604 | orchestrator | Monday 13 April 2026 01:02:39 +0000 (0:00:27.836) 0:00:55.116 ********** 2026-04-13 01:05:11.652615 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 01:05:11.652626 | orchestrator | 
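Editor's note: the "Find prometheus … config overrides" tasks above and below, and the "[WARNING]: Skipped … is not a directory" messages that follow, are the normal output of an `ansible.builtin.find` task delegated to localhost: when the configured override directory does not exist for a host, `find` emits that access-issue warning, returns an empty file list, and the task still reports `ok`. A minimal sketch of this pattern, assuming the path layout shown in the log (the variable and task names here are illustrative, not the role's actual code):

```yaml
# Hypothetical sketch of the override-discovery pattern seen in this log.
# The overlay path is taken from the warnings above; names are assumptions.
- name: Find prometheus host config overrides
  ansible.builtin.find:
    paths: >-
      /opt/configuration/environments/kolla/files/overlays/prometheus/{{ inventory_hostname }}/prometheus.yml.d
    patterns: "*.yml"
  delegate_to: localhost
  register: prometheus_host_overrides
```

Because a missing directory is only a warning, an empty `prometheus_host_overrides.files` list simply means no per-host overlay was provided, which matches the all-`ok` results recorded for every testbed node.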
2026-04-13 01:05:11.652640 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-13 01:05:11.652650 | orchestrator | Monday 13 April 2026 01:02:40 +0000 (0:00:00.847) 0:00:55.964 ********** 2026-04-13 01:05:11.652658 | orchestrator | [WARNING]: Skipped 2026-04-13 01:05:11.652667 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:05:11.652681 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-13 01:05:11.652689 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:05:11.652698 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-13 01:05:11.652706 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 01:05:11.652714 | orchestrator | [WARNING]: Skipped 2026-04-13 01:05:11.652723 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:05:11.652731 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-13 01:05:11.652739 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:05:11.652747 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-13 01:05:11.652756 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-13 01:05:11.652764 | orchestrator | [WARNING]: Skipped 2026-04-13 01:05:11.652772 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:05:11.652781 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-13 01:05:11.652789 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:05:11.652797 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-13 01:05:11.652805 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-13 01:05:11.652814 | orchestrator | [WARNING]: 
Skipped 2026-04-13 01:05:11.652822 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:05:11.652833 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-13 01:05:11.652842 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:05:11.652850 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-13 01:05:11.652858 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-13 01:05:11.652866 | orchestrator | [WARNING]: Skipped 2026-04-13 01:05:11.652875 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:05:11.652883 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-13 01:05:11.652891 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:05:11.652900 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-13 01:05:11.652908 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-13 01:05:11.652916 | orchestrator | [WARNING]: Skipped 2026-04-13 01:05:11.652925 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:05:11.652933 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-13 01:05:11.652942 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:05:11.652950 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-13 01:05:11.652958 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-13 01:05:11.652966 | orchestrator | [WARNING]: Skipped 2026-04-13 01:05:11.652975 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:05:11.652983 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-13 01:05:11.652991 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:05:11.652999 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-13 01:05:11.653008 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-13 01:05:11.653016 | orchestrator | 2026-04-13 01:05:11.653024 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-13 01:05:11.653033 | orchestrator | Monday 13 April 2026 01:02:43 +0000 (0:00:02.543) 0:00:58.507 ********** 2026-04-13 01:05:11.653041 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-13 01:05:11.653049 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:11.653076 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-13 01:05:11.653091 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:11.653099 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-13 01:05:11.653107 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:05:11.653116 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-13 01:05:11.653124 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:05:11.653133 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-13 01:05:11.653141 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:05:11.653149 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-13 01:05:11.653158 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:05:11.653166 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-13 01:05:11.653174 | orchestrator | 2026-04-13 01:05:11.653183 | orchestrator | TASK 
[prometheus : Copying over prometheus web config file] ******************** 2026-04-13 01:05:11.653191 | orchestrator | Monday 13 April 2026 01:03:04 +0000 (0:00:21.238) 0:01:19.746 ********** 2026-04-13 01:05:11.653199 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-13 01:05:11.653212 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:11.653221 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-13 01:05:11.653229 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:05:11.653238 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-13 01:05:11.653246 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:05:11.653255 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-13 01:05:11.653263 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:11.653271 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-13 01:05:11.653279 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:05:11.653288 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-13 01:05:11.653296 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:05:11.653304 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-13 01:05:11.653313 | orchestrator | 2026-04-13 01:05:11.653321 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-13 01:05:11.653329 | orchestrator | Monday 13 April 2026 01:03:08 +0000 (0:00:04.421) 0:01:24.167 ********** 2026-04-13 01:05:11.653338 | orchestrator | skipping: [testbed-node-0] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-13 01:05:11.653346 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-13 01:05:11.653354 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:11.653363 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:05:11.653374 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-13 01:05:11.653383 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:11.653392 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-13 01:05:11.653400 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-13 01:05:11.653409 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:05:11.653417 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-13 01:05:11.653430 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:05:11.653439 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-13 01:05:11.653447 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:05:11.653455 | orchestrator | 2026-04-13 01:05:11.653464 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-13 01:05:11.653472 | orchestrator | Monday 13 April 2026 01:03:11 +0000 (0:00:02.860) 0:01:27.027 ********** 2026-04-13 01:05:11.653480 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 01:05:11.653489 | orchestrator | 2026-04-13 
01:05:11.653497 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-13 01:05:11.653505 | orchestrator | Monday 13 April 2026 01:03:12 +0000 (0:00:00.948) 0:01:27.976 ********** 2026-04-13 01:05:11.653514 | orchestrator | skipping: [testbed-manager] 2026-04-13 01:05:11.653522 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:11.653530 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:05:11.653538 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:11.653547 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:05:11.653555 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:05:11.653563 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:05:11.653572 | orchestrator | 2026-04-13 01:05:11.653580 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-13 01:05:11.653588 | orchestrator | Monday 13 April 2026 01:03:13 +0000 (0:00:00.842) 0:01:28.818 ********** 2026-04-13 01:05:11.653597 | orchestrator | skipping: [testbed-manager] 2026-04-13 01:05:11.653605 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:05:11.653613 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:05:11.653621 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:11.653630 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:05:11.653638 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:05:11.653646 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:05:11.653654 | orchestrator | 2026-04-13 01:05:11.653663 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-13 01:05:11.653671 | orchestrator | Monday 13 April 2026 01:03:16 +0000 (0:00:02.745) 0:01:31.564 ********** 2026-04-13 01:05:11.653680 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-13 01:05:11.653688 | orchestrator | skipping: 
[testbed-manager]
2026-04-13 01:05:11.653696 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-13 01:05:11.653704 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:11.653713 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-13 01:05:11.653721 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:11.653729 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-13 01:05:11.653738 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:05:11.653750 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-13 01:05:11.653759 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:05:11.653767 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-13 01:05:11.653775 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:05:11.653784 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-13 01:05:11.653792 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:05:11.653801 | orchestrator |
2026-04-13 01:05:11.653809 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-04-13 01:05:11.653817 | orchestrator | Monday 13 April 2026 01:03:17 +0000 (0:00:01.849) 0:01:33.413 **********
2026-04-13 01:05:11.653831 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-13 01:05:11.653839 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:11.653848 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-13 01:05:11.653856 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:11.653864 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-13 01:05:11.653873 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:05:11.653881 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-13 01:05:11.653889 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:05:11.653897 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-13 01:05:11.653906 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-13 01:05:11.653917 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:05:11.653926 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-13 01:05:11.653934 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:05:11.653942 | orchestrator |
2026-04-13 01:05:11.653951 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-04-13 01:05:11.653959 | orchestrator | Monday 13 April 2026 01:03:19 +0000 (0:00:01.916) 0:01:35.330 **********
2026-04-13 01:05:11.653967 | orchestrator | [WARNING]: Skipped
2026-04-13 01:05:11.653976 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-04-13 01:05:11.653984 | orchestrator | due to this access issue:
2026-04-13 01:05:11.653992 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-04-13 01:05:11.654000 | orchestrator | not a directory
2026-04-13 01:05:11.654009 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-13 01:05:11.654041 | orchestrator |
2026-04-13 01:05:11.654050 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-04-13 01:05:11.654074 | orchestrator | Monday 13 April 2026 01:03:20 +0000 (0:00:01.066) 0:01:36.396 **********
2026-04-13 01:05:11.654083 | orchestrator | skipping: [testbed-manager]
2026-04-13 01:05:11.654091 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:11.654100 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:11.654108 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:05:11.654116 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:05:11.654124 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:05:11.654133 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:05:11.654141 | orchestrator |
2026-04-13 01:05:11.654149 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-04-13 01:05:11.654157 | orchestrator | Monday 13 April 2026 01:03:21 +0000 (0:00:00.821) 0:01:37.218 **********
2026-04-13 01:05:11.654166 | orchestrator | skipping: [testbed-manager]
2026-04-13 01:05:11.654174 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:11.654182 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:11.654190 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:05:11.654198 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:05:11.654206 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:05:11.654215 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:05:11.654223 | orchestrator |
2026-04-13 01:05:11.654231 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-04-13 01:05:11.654239 | orchestrator | Monday 13 April 2026 01:03:22 +0000 (0:00:01.058) 0:01:38.276 **********
2026-04-13 01:05:11.654248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:05:11.654268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:05:11.654277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:05:11.654286 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-13 01:05:11.654300 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:05:11.654309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:05:11.654318 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:05:11.654327 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:05:11.654340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:05:11.654353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:05:11.654363 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:05:11.654371 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:05:11.654383 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:05:11.654392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:05:11.654401 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:05:11.654410 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 01:05:11.654426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:05:11.654439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:05:11.654448 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-13 01:05:11.654461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:05:11.654471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:05:11.654479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 01:05:11.654493 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:05:11.654501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:05:11.654514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:05:11.654523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 01:05:11.654532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:05:11.654544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:05:11.654553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:05:11.654561 | orchestrator |
2026-04-13 01:05:11.654570 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-04-13 01:05:11.654583 | orchestrator | Monday 13 April 2026 01:03:27 +0000 (0:00:04.897) 0:01:43.173 **********
2026-04-13 01:05:11.654591 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-13 01:05:11.654600 | orchestrator | skipping: [testbed-manager]
2026-04-13 01:05:11.654608 | orchestrator |
2026-04-13 01:05:11.654616 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-13 01:05:11.654625 | orchestrator | Monday 13 April 2026 01:03:28 +0000 (0:00:01.035) 0:01:44.209 **********
2026-04-13 01:05:11.654633 | orchestrator |
2026-04-13 01:05:11.654641 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-13 01:05:11.654650 | orchestrator | Monday 13 April 2026 01:03:28 +0000 (0:00:00.064) 0:01:44.273 **********
2026-04-13 01:05:11.654658 | orchestrator |
2026-04-13 01:05:11.654666 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-13 01:05:11.654675 | orchestrator | Monday 13 April 2026 01:03:28 +0000 (0:00:00.061) 0:01:44.334 **********
2026-04-13 01:05:11.654683 | orchestrator |
2026-04-13 01:05:11.654692 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-13 01:05:11.654700 | orchestrator | Monday 13 April 2026 01:03:28 +0000 (0:00:00.060) 0:01:44.395 **********
2026-04-13 01:05:11.654708 | orchestrator |
2026-04-13 01:05:11.654717 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-13 01:05:11.654725 | orchestrator | Monday 13 April 2026 01:03:28 +0000 (0:00:00.059) 0:01:44.455 **********
2026-04-13 01:05:11.654734 | orchestrator |
2026-04-13 01:05:11.654742 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-13 01:05:11.654750 | orchestrator | Monday 13 April 2026 01:03:29 +0000 (0:00:00.061) 0:01:44.516 **********
2026-04-13 01:05:11.654759 | orchestrator |
2026-04-13 01:05:11.654767 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-13 01:05:11.654775 | orchestrator | Monday 13 April 2026 01:03:29 +0000 (0:00:00.061) 0:01:44.578 **********
2026-04-13 01:05:11.654784 | orchestrator |
2026-04-13 01:05:11.654792 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-04-13 01:05:11.654800 | orchestrator | Monday 13 April 2026 01:03:29 +0000 (0:00:00.089) 0:01:44.667 **********
2026-04-13 01:05:11.654809 | orchestrator | changed: [testbed-manager]
2026-04-13 01:05:11.654817 | orchestrator |
2026-04-13 01:05:11.654826 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-04-13 01:05:11.654838 | orchestrator | Monday 13 April 2026 01:03:43 +0000 (0:00:14.545) 0:01:59.213 **********
2026-04-13 01:05:11.654846 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:05:11.654855 | orchestrator | changed: [testbed-manager]
2026-04-13 01:05:11.654863 | orchestrator | changed: [testbed-node-1]
2026-04-13 01:05:11.654871 | orchestrator | changed: [testbed-node-2]
2026-04-13 01:05:11.654880 | orchestrator | changed: [testbed-node-3]
2026-04-13 01:05:11.654888 | orchestrator | changed: [testbed-node-5]
2026-04-13 01:05:11.654896 | orchestrator | changed: [testbed-node-4]
2026-04-13 01:05:11.654904 | orchestrator |
2026-04-13 01:05:11.654913 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-04-13 01:05:11.654921 | orchestrator | Monday 13 April 2026 01:04:00 +0000 (0:00:16.442) 0:02:15.656 **********
2026-04-13 01:05:11.654930 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:05:11.654938 | orchestrator | changed: [testbed-node-2]
2026-04-13 01:05:11.654946 | orchestrator | changed: [testbed-node-1]
2026-04-13 01:05:11.654955 | orchestrator |
2026-04-13 01:05:11.654963 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-04-13 01:05:11.654971 | orchestrator | Monday 13 April 2026 01:04:10 +0000 (0:00:10.045) 0:02:25.702 **********
2026-04-13 01:05:11.654979 | orchestrator | changed: [testbed-node-2]
2026-04-13 01:05:11.654988 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:05:11.654996 | orchestrator | changed: [testbed-node-1]
2026-04-13 01:05:11.655004 | orchestrator |
2026-04-13 01:05:11.655013 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-04-13 01:05:11.655025 | orchestrator | Monday 13 April 2026 01:04:20 +0000 (0:00:10.170) 0:02:35.873 **********
2026-04-13 01:05:11.655033 | orchestrator | changed: [testbed-node-1]
2026-04-13 01:05:11.655042 | orchestrator | changed: [testbed-node-5]
2026-04-13 01:05:11.655050 | orchestrator | changed: [testbed-node-3]
2026-04-13 01:05:11.655104 | orchestrator | changed: [testbed-manager]
2026-04-13 01:05:11.655118 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:05:11.655130 | orchestrator | changed: [testbed-node-2]
2026-04-13 01:05:11.655138 | orchestrator | changed: [testbed-node-4]
2026-04-13 01:05:11.655146 | orchestrator |
2026-04-13 01:05:11.655155 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-04-13 01:05:11.655167 | orchestrator | Monday 13 April 2026 01:04:35 +0000 (0:00:14.623) 0:02:50.496 **********
2026-04-13 01:05:11.655176 | orchestrator | changed: [testbed-manager]
2026-04-13 01:05:11.655184 | orchestrator |
2026-04-13 01:05:11.655192 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-04-13 01:05:11.655200 | orchestrator | Monday 13 April 2026 01:04:41 +0000 (0:00:06.861) 0:02:57.357 **********
2026-04-13 01:05:11.655209 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:05:11.655217 | orchestrator | changed: [testbed-node-1]
2026-04-13 01:05:11.655225 | orchestrator | changed: [testbed-node-2]
2026-04-13 01:05:11.655234 | orchestrator |
2026-04-13 01:05:11.655242 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-04-13 01:05:11.655250 | orchestrator | Monday 13 April 2026 01:04:55 +0000 (0:00:13.714) 0:03:11.072 **********
2026-04-13 01:05:11.655259 | orchestrator | changed: [testbed-manager]
2026-04-13 01:05:11.655267 | orchestrator |
2026-04-13 01:05:11.655275 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-04-13 01:05:11.655283 | orchestrator | Monday 13 April 2026 01:05:00 +0000 (0:00:04.600) 0:03:15.673 **********
2026-04-13 01:05:11.655292 | orchestrator | changed: [testbed-node-3]
2026-04-13 01:05:11.655300 | orchestrator | changed: [testbed-node-5]
2026-04-13 01:05:11.655308 | orchestrator | changed: [testbed-node-4]
2026-04-13 01:05:11.655317 | orchestrator |
2026-04-13 01:05:11.655325 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 01:05:11.655333 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-13 01:05:11.655342 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-13 01:05:11.655350 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-13 01:05:11.655359 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-13 01:05:11.655367 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-13 01:05:11.655375 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-13 01:05:11.655384 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-13 01:05:11.655392 | orchestrator |
2026-04-13 01:05:11.655400 | orchestrator |
2026-04-13 01:05:11.655408 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 01:05:11.655417 | orchestrator | Monday 13 April 2026 01:05:10 +0000 (0:00:10.367) 0:03:26.040 **********
2026-04-13 01:05:11.655425 | orchestrator | ===============================================================================
2026-04-13 01:05:11.655433 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 27.84s
2026-04-13 01:05:11.655448 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 21.24s
2026-04-13 01:05:11.655456 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.44s
2026-04-13 01:05:11.655464 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.62s
2026-04-13 01:05:11.655473 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 14.55s
2026-04-13 01:05:11.655486 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 13.71s
2026-04-13 01:05:11.655494 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.37s
2026-04-13 01:05:11.655502 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.17s
2026-04-13 01:05:11.655511 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.05s
2026-04-13 01:05:11.655519 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.86s
2026-04-13 01:05:11.655527 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.33s
2026-04-13 01:05:11.655536 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.00s
2026-04-13 01:05:11.655544 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 5.03s
2026-04-13 01:05:11.655552 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.90s
2026-04-13 01:05:11.655560 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.60s
2026-04-13 01:05:11.655567 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.42s
2026-04-13 01:05:11.655576 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.86s
2026-04-13 01:05:11.655587 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.75s
2026-04-13 01:05:11.655599 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.54s
2026-04-13 01:05:11.655610 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.08s
2026-04-13 01:05:14.693876 | orchestrator | 2026-04-13 01:05:14 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED
2026-04-13 01:05:14.693998 | orchestrator | 2026-04-13 01:05:14 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:05:14.695241 | orchestrator |
2026-04-13 01:05:14.695284 | orchestrator | 2026-04-13 01:05:14 | INFO  | Task 9ee6fb3f-df81-4ab0-9ef5-3cd2fefe51a5 is in state SUCCESS
2026-04-13 01:05:14.696801 | orchestrator |
2026-04-13 01:05:14.696853 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 01:05:14.696863 | orchestrator |
2026-04-13 01:05:14.696869 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 01:05:14.696876 | orchestrator | Monday 13 April 2026 01:01:52 +0000 (0:00:00.370) 0:00:00.370 **********
2026-04-13 01:05:14.696882 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:05:14.696888 | orchestrator | ok: [testbed-node-1]
2026-04-13 01:05:14.696894 | orchestrator | ok: [testbed-node-2]
2026-04-13 01:05:14.696900 | orchestrator |
2026-04-13 01:05:14.696906 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 01:05:14.696912 | orchestrator | Monday 13 April 2026 01:01:52 +0000 (0:00:00.319) 0:00:00.690 **********
2026-04-13 01:05:14.696918 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-04-13 01:05:14.696924 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-13 01:05:14.696929 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-13 01:05:14.696935 | orchestrator |
2026-04-13 01:05:14.696941 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-13 01:05:14.696946 | orchestrator |
2026-04-13 01:05:14.696952 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-13 01:05:14.696957 | orchestrator | Monday 13 April 2026 01:01:53 +0000 (0:00:00.333) 0:00:01.023 **********
2026-04-13 01:05:14.696963 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 01:05:14.696986 | orchestrator |
2026-04-13 01:05:14.696992 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-04-13 01:05:14.696998 | orchestrator | Monday 13 April 2026 01:01:54 +0000 (0:00:00.904) 0:00:01.927 **********
2026-04-13 01:05:14.697003 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-04-13 01:05:14.697009 | orchestrator |
2026-04-13 01:05:14.697015 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-04-13 01:05:14.697020 | orchestrator | Monday 13 April 2026 01:02:07 +0000 (0:00:13.180) 0:00:15.107 **********
2026-04-13 01:05:14.697026 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-04-13 01:05:14.697032 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-04-13 01:05:14.697038 | orchestrator |
2026-04-13 01:05:14.697043 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-04-13 01:05:14.697049 | orchestrator | Monday 13 April 2026 01:02:14 +0000 (0:00:07.210) 0:00:22.318 **********
2026-04-13 01:05:14.697081 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-04-13 01:05:14.697087 | orchestrator |
2026-04-13 01:05:14.697093 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-04-13 01:05:14.697099 | orchestrator | Monday 13 April 2026 01:02:18 +0000 (0:00:03.565) 0:00:25.883 **********
2026-04-13 01:05:14.697105 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-04-13 01:05:14.697112 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-13 01:05:14.697117 | orchestrator |
2026-04-13 01:05:14.697123 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-04-13 01:05:14.697128 | orchestrator | Monday 13 April 2026 01:02:21 +0000 (0:00:03.923) 0:00:29.807 **********
2026-04-13 01:05:14.697134 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-13 01:05:14.697140 | orchestrator |
2026-04-13 01:05:14.697146 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-04-13 01:05:14.697151 | orchestrator | Monday 13 April 2026 01:02:24 +0000 (0:00:02.909) 0:00:32.716 **********
2026-04-13 01:05:14.697157 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-04-13 01:05:14.697162 | orchestrator |
2026-04-13 01:05:14.697169 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-13 01:05:14.697178 | orchestrator | Monday 13 April 2026 01:02:28 +0000 (0:00:03.196) 0:00:35.912 **********
2026-04-13 01:05:14.697216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 01:05:14.697238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000
rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-13 01:05:14.697248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-13 01:05:14.697257 | orchestrator | 2026-04-13 01:05:14.697264 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-13 
01:05:14.697272 | orchestrator | Monday 13 April 2026 01:02:32 +0000 (0:00:03.950) 0:00:39.862 ********** 2026-04-13 01:05:14.697284 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:05:14.697293 | orchestrator | 2026-04-13 01:05:14.697307 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-13 01:05:14.697321 | orchestrator | Monday 13 April 2026 01:02:32 +0000 (0:00:00.740) 0:00:40.602 ********** 2026-04-13 01:05:14.697329 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:05:14.697338 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:05:14.697346 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:14.697354 | orchestrator | 2026-04-13 01:05:14.697362 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-04-13 01:05:14.697371 | orchestrator | Monday 13 April 2026 01:02:37 +0000 (0:00:04.357) 0:00:44.960 ********** 2026-04-13 01:05:14.697381 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-13 01:05:14.697391 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-13 01:05:14.697401 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-13 01:05:14.697410 | orchestrator | 2026-04-13 01:05:14.697419 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-13 01:05:14.697429 | orchestrator | Monday 13 April 2026 01:02:38 +0000 (0:00:01.810) 0:00:46.770 ********** 2026-04-13 01:05:14.697438 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-13 01:05:14.697448 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-13 01:05:14.697457 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-13 01:05:14.697466 | orchestrator | 2026-04-13 01:05:14.697476 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-13 01:05:14.697490 | orchestrator | Monday 13 April 2026 01:02:40 +0000 (0:00:01.481) 0:00:48.252 ********** 2026-04-13 01:05:14.697499 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:05:14.697508 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:05:14.697516 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:05:14.697525 | orchestrator | 2026-04-13 01:05:14.697534 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-13 01:05:14.697542 | orchestrator | Monday 13 April 2026 01:02:41 +0000 (0:00:00.908) 0:00:49.160 ********** 2026-04-13 01:05:14.697551 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:14.697559 | orchestrator | 2026-04-13 01:05:14.697568 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-13 01:05:14.697577 | orchestrator | Monday 13 April 2026 01:02:41 +0000 (0:00:00.216) 0:00:49.376 ********** 2026-04-13 01:05:14.697587 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:14.697596 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:05:14.697605 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:14.697614 | orchestrator | 2026-04-13 01:05:14.697622 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-13 01:05:14.697628 | orchestrator | Monday 13 April 2026 01:02:42 +0000 (0:00:00.586) 0:00:49.963 ********** 2026-04-13 01:05:14.697634 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 
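The `Monday 13 April 2026 ... (0:00:13.180) 0:00:15.107 **********` lines above come from Ansible's `profile_tasks` timer callback: the parenthesised value is the duration of the task that just finished, and the second value is the cumulative play time. When skimming a long run like this one for slow tasks, a small parser for those lines can help. This is a hedged sketch, not part of the job itself; the helper names are invented:

```python
import re

# A profile_tasks timing line looks like:
#   Monday 13 April 2026 01:02:07 +0000 (0:00:13.180) 0:00:15.107 **********
# (previous task duration in parentheses, then cumulative play time).
TIMING = re.compile(
    r"\((?P<task>\d+:\d{2}:\d{2}\.\d{3})\)\s+(?P<total>\d+:\d{2}:\d{2}\.\d{3})"
)

def parse_duration(stamp):
    """Convert an Ansible 'H:MM:SS.mmm' stamp to seconds."""
    hours, minutes, seconds = stamp.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

def task_seconds(line):
    """Return the per-task duration in seconds, or None for non-timing lines."""
    match = TIMING.search(line)
    return parse_duration(match.group("task")) if match else None

line = "Monday 13 April 2026 01:02:07 +0000 (0:00:13.180) 0:00:15.107 **********"
print(task_seconds(line))  # 13.18
```

Feeding every console line through `task_seconds` and sorting the non-None results would surface the slowest tasks of the deployment (here, the 13-second keystone service registration).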
2026-04-13 01:05:14.697640 | orchestrator |
2026-04-13 01:05:14.697646 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-04-13 01:05:14.697651 | orchestrator | Monday 13 April 2026 01:02:43 +0000 (0:00:00.902) 0:00:50.866 **********
2026-04-13 01:05:14.697663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 01:05:14.697693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 01:05:14.697700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 01:05:14.697711 | orchestrator |
2026-04-13 01:05:14.697717 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2026-04-13 01:05:14.697722 | orchestrator | Monday 13 April 2026 01:02:48 +0000 (0:00:05.626) 0:00:56.492 **********
2026-04-13 01:05:14.697736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 01:05:14.697743 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:14.697750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 01:05:14.697759 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:14.697773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 01:05:14.697780 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:05:14.697786 | orchestrator |
2026-04-13 01:05:14.697792 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2026-04-13 01:05:14.697797 | orchestrator | Monday 13 April 2026 01:02:52 +0000 (0:00:03.537) 0:01:00.030 **********
2026-04-13 01:05:14.697803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 01:05:14.697810 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:14.697819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 01:05:14.697829 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:14.697840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 01:05:14.697846 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:05:14.697852 | orchestrator |
2026-04-13 01:05:14.697858 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2026-04-13 01:05:14.697863 | orchestrator | Monday 13 April 2026 01:02:56 +0000 (0:00:03.878) 0:01:03.908 **********
2026-04-13 01:05:14.697869 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:14.697875 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:05:14.697881 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:14.697886 | orchestrator |
2026-04-13 01:05:14.697892 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2026-04-13 01:05:14.697901 | orchestrator | Monday 13 April 2026 01:03:00 +0000 (0:00:04.020) 0:01:07.929 **********
2026-04-13 01:05:14.697908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 01:05:14.697921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 01:05:14.697928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 01:05:14.697938 | orchestrator |
2026-04-13 01:05:14.697944 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-04-13 01:05:14.697950 | orchestrator | Monday 13 April 2026 01:03:05 +0000 (0:00:05.816) 0:01:13.745 **********
2026-04-13 01:05:14.697955 | orchestrator | changed: [testbed-node-2]
2026-04-13 01:05:14.697961 | orchestrator | changed: [testbed-node-1]
2026-04-13 01:05:14.697967 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:05:14.697973 | orchestrator |
2026-04-13 01:05:14.697978 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-04-13 01:05:14.697984 | orchestrator | Monday 13 April 2026 01:03:13 +0000 (0:00:07.633) 0:01:21.379 **********
2026-04-13 01:05:14.697990 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:14.697995 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:14.698001 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:05:14.698006 | orchestrator |
2026-04-13 01:05:14.698046 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-04-13 01:05:14.698077 | orchestrator | Monday 13 April 2026 01:03:18 +0000 (0:00:04.596) 0:01:25.975 **********
2026-04-13 01:05:14.698084 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:14.698095 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:14.698104 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:05:14.698114 | orchestrator |
2026-04-13 01:05:14.698123 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-04-13 01:05:14.698137 | orchestrator | Monday 13 April 2026 01:03:21 +0000 (0:00:03.751) 0:01:29.727 **********
2026-04-13 01:05:14.698146 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:05:14.698154 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:14.698169 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:14.698180 | orchestrator |
2026-04-13 01:05:14.698190 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-04-13 01:05:14.698200 | orchestrator | Monday 13 April 2026 01:03:25 +0000 (0:00:04.042) 0:01:33.769 **********
2026-04-13 01:05:14.698210 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:14.698219 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:14.698225 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:05:14.698230 | orchestrator |
2026-04-13 01:05:14.698236 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-04-13 01:05:14.698242 | orchestrator | Monday 13 April 2026 01:03:28 +0000 (0:00:02.692) 0:01:36.462 **********
2026-04-13 01:05:14.698247 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:14.698253 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:14.698259 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:05:14.698264 | orchestrator |
2026-04-13 01:05:14.698270 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-04-13 01:05:14.698275 | orchestrator | Monday 13 April 2026 01:03:29 +0000 (0:00:00.452) 0:01:36.914 **********
2026-04-13 01:05:14.698287 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-13 01:05:14.698293 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:14.698299 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-13 01:05:14.698305 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:14.698310 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-13 01:05:14.698316 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:05:14.698322 | orchestrator |
2026-04-13 01:05:14.698328 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-04-13 01:05:14.698333 | orchestrator | Monday 13 April 2026 01:03:33 +0000 (0:00:04.407) 0:01:41.322 **********
2026-04-13 01:05:14.698339 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:14.698345 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:05:14.698350 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:14.698356 | orchestrator |
2026-04-13 01:05:14.698362 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************
2026-04-13 01:05:14.698367 | orchestrator | Monday 13 April 2026 01:03:38 +0000 (0:00:04.538) 0:01:46.521 **********
2026-04-13 01:05:14.698373 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:05:14.698379 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:05:14.698385 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:05:14.698391 | orchestrator |
2026-04-13 01:05:14.698396 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-04-13 01:05:14.698402 | orchestrator | Monday 13 April 2026 01:03:43 +0000 (0:00:04.538) 0:01:51.059 **********
2026-04-13 01:05:14.698409 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-13 01:05:14.698424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-13 01:05:14.698436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-13 01:05:14.698442 | orchestrator | 2026-04-13 01:05:14.698448 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-13 01:05:14.698454 | orchestrator | Monday 13 April 2026 01:03:52 +0000 (0:00:08.962) 0:02:00.021 ********** 2026-04-13 01:05:14.698459 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:14.698465 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:05:14.698470 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:14.698476 | orchestrator | 2026-04-13 01:05:14.698482 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-04-13 01:05:14.698487 | orchestrator | Monday 13 April 2026 01:03:52 +0000 (0:00:00.399) 0:02:00.421 ********** 2026-04-13 01:05:14.698493 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:14.698502 | orchestrator | 
2026-04-13 01:05:14.698512 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-04-13 01:05:14.698521 | orchestrator | Monday 13 April 2026 01:03:54 +0000 (0:00:01.994) 0:02:02.416 ********** 2026-04-13 01:05:14.698531 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:14.698539 | orchestrator | 2026-04-13 01:05:14.698549 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-04-13 01:05:14.698558 | orchestrator | Monday 13 April 2026 01:03:56 +0000 (0:00:02.361) 0:02:04.778 ********** 2026-04-13 01:05:14.698574 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:14.698583 | orchestrator | 2026-04-13 01:05:14.698592 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-04-13 01:05:14.698601 | orchestrator | Monday 13 April 2026 01:03:59 +0000 (0:00:02.071) 0:02:06.849 ********** 2026-04-13 01:05:14.698610 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:14.698617 | orchestrator | 2026-04-13 01:05:14.698632 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-13 01:05:14.698641 | orchestrator | Monday 13 April 2026 01:04:29 +0000 (0:00:30.073) 0:02:36.923 ********** 2026-04-13 01:05:14.698651 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:14.698660 | orchestrator | 2026-04-13 01:05:14.698676 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-13 01:05:14.698683 | orchestrator | Monday 13 April 2026 01:04:31 +0000 (0:00:02.069) 0:02:38.992 ********** 2026-04-13 01:05:14.698689 | orchestrator | 2026-04-13 01:05:14.698694 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-13 01:05:14.698700 | orchestrator | Monday 13 April 2026 01:04:31 +0000 (0:00:00.064) 0:02:39.056 ********** 2026-04-13 01:05:14.698705 | orchestrator | 
2026-04-13 01:05:14.698711 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-13 01:05:14.698717 | orchestrator | Monday 13 April 2026 01:04:31 +0000 (0:00:00.068) 0:02:39.125 ********** 2026-04-13 01:05:14.698722 | orchestrator | 2026-04-13 01:05:14.698728 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-13 01:05:14.698733 | orchestrator | Monday 13 April 2026 01:04:31 +0000 (0:00:00.065) 0:02:39.191 ********** 2026-04-13 01:05:14.698739 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:14.698745 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:05:14.698750 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:05:14.698756 | orchestrator | 2026-04-13 01:05:14.698762 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:05:14.698768 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2026-04-13 01:05:14.698775 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-13 01:05:14.698781 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-13 01:05:14.698787 | orchestrator | 2026-04-13 01:05:14.698793 | orchestrator | 2026-04-13 01:05:14.698798 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:05:14.698804 | orchestrator | Monday 13 April 2026 01:05:12 +0000 (0:00:40.902) 0:03:20.094 ********** 2026-04-13 01:05:14.698809 | orchestrator | =============================================================================== 2026-04-13 01:05:14.698815 | orchestrator | glance : Restart glance-api container ---------------------------------- 40.90s 2026-04-13 01:05:14.698821 | orchestrator | glance : Running Glance bootstrap container 
---------------------------- 30.07s 2026-04-13 01:05:14.698826 | orchestrator | service-ks-register : glance | Creating services ----------------------- 13.18s 2026-04-13 01:05:14.698832 | orchestrator | glance : Check glance containers ---------------------------------------- 8.96s 2026-04-13 01:05:14.698838 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.63s 2026-04-13 01:05:14.698843 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.21s 2026-04-13 01:05:14.698849 | orchestrator | glance : Copying over config.json files for services -------------------- 5.82s 2026-04-13 01:05:14.698855 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.63s 2026-04-13 01:05:14.698860 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.20s 2026-04-13 01:05:14.698866 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.60s 2026-04-13 01:05:14.698876 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 4.54s 2026-04-13 01:05:14.698882 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.41s 2026-04-13 01:05:14.698887 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.36s 2026-04-13 01:05:14.698893 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.04s 2026-04-13 01:05:14.698899 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.02s 2026-04-13 01:05:14.698905 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.95s 2026-04-13 01:05:14.698910 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.92s 2026-04-13 01:05:14.698916 | orchestrator | service-cert-copy : glance | Copying over backend internal 
TLS key ------ 3.88s 2026-04-13 01:05:14.698922 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.75s 2026-04-13 01:05:14.698927 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.57s 2026-04-13 01:05:14.698933 | orchestrator | 2026-04-13 01:05:14 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:05:14.704529 | orchestrator | 2026-04-13 01:05:14 | INFO  | Task 1681fa0b-3b28-4121-bf65-b776429683c6 is in state STARTED 2026-04-13 01:05:14.704611 | orchestrator | 2026-04-13 01:05:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:05:17.740141 | orchestrator | 2026-04-13 01:05:17 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state STARTED 2026-04-13 01:05:17.742099 | orchestrator | 2026-04-13 01:05:17 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:05:17.745090 | orchestrator | 2026-04-13 01:05:17 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:05:17.746423 | orchestrator | 2026-04-13 01:05:17 | INFO  | Task 1681fa0b-3b28-4121-bf65-b776429683c6 is in state STARTED 2026-04-13 01:05:17.746674 | orchestrator | 2026-04-13 01:05:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:05:54.383141 | orchestrator | 2026-04-13 01:05:54 | INFO  | Task d86f12fb-faf2-4dc8-b707-4b8e064ebab3 is in state SUCCESS 2026-04-13 01:05:54.384912 | orchestrator | 2026-04-13 01:05:54.384975 | orchestrator | 2026-04-13 01:05:54.384987 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 01:05:54.384997 | orchestrator | 2026-04-13 01:05:54.385006 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 01:05:54.385015 | orchestrator | Monday 13 April 2026 01:02:24 +0000 (0:00:00.259) 0:00:00.259 ********** 2026-04-13 01:05:54.385043 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:05:54.385054 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:05:54.385062 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:05:54.385070 | orchestrator | 2026-04-13 01:05:54.385079 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 01:05:54.385087 | orchestrator | Monday 13 April 2026 01:02:24 +0000 (0:00:00.403) 0:00:00.663 ********** 2026-04-13 01:05:54.385097 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-04-13 01:05:54.385105 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-04-13 01:05:54.385114 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-04-13
01:05:54.385122 | orchestrator | 2026-04-13 01:05:54.385131 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-04-13 01:05:54.385144 | orchestrator | 2026-04-13 01:05:54.385158 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-13 01:05:54.385170 | orchestrator | Monday 13 April 2026 01:02:25 +0000 (0:00:00.272) 0:00:00.935 ********** 2026-04-13 01:05:54.385179 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:05:54.385188 | orchestrator | 2026-04-13 01:05:54.385199 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-04-13 01:05:54.385212 | orchestrator | Monday 13 April 2026 01:02:25 +0000 (0:00:00.568) 0:00:01.504 ********** 2026-04-13 01:05:54.385223 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-04-13 01:05:54.385261 | orchestrator | 2026-04-13 01:05:54.385275 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-04-13 01:05:54.385289 | orchestrator | Monday 13 April 2026 01:02:29 +0000 (0:00:03.384) 0:00:04.888 ********** 2026-04-13 01:05:54.385299 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-04-13 01:05:54.385321 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-04-13 01:05:54.385329 | orchestrator | 2026-04-13 01:05:54.385337 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-04-13 01:05:54.385346 | orchestrator | Monday 13 April 2026 01:02:36 +0000 (0:00:07.050) 0:00:11.939 ********** 2026-04-13 01:05:54.385354 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-13 01:05:54.385362 | orchestrator | 2026-04-13 
01:05:54.385370 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-04-13 01:05:54.385378 | orchestrator | Monday 13 April 2026 01:02:39 +0000 (0:00:03.335) 0:00:15.274 ********** 2026-04-13 01:05:54.385434 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-04-13 01:05:54.385444 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-13 01:05:54.385452 | orchestrator | 2026-04-13 01:05:54.385460 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-04-13 01:05:54.385468 | orchestrator | Monday 13 April 2026 01:02:43 +0000 (0:00:04.423) 0:00:19.698 ********** 2026-04-13 01:05:54.385477 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-13 01:05:54.385487 | orchestrator | 2026-04-13 01:05:54.385497 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-04-13 01:05:54.385506 | orchestrator | Monday 13 April 2026 01:02:47 +0000 (0:00:03.439) 0:00:23.137 ********** 2026-04-13 01:05:54.385515 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-04-13 01:05:54.385524 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-04-13 01:05:54.385534 | orchestrator | 2026-04-13 01:05:54.385543 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-13 01:05:54.385646 | orchestrator | Monday 13 April 2026 01:02:55 +0000 (0:00:07.606) 0:00:30.743 ********** 2026-04-13 01:05:54.385661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 01:05:54.385690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.385702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 01:05:54.385731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 01:05:54.385748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.385758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.385769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.385786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.385803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.385818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.385828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.385839 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.385849 | orchestrator | 2026-04-13 01:05:54.385858 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-13 01:05:54.385868 | orchestrator | Monday 13 April 2026 01:02:58 +0000 (0:00:03.013) 0:00:33.757 ********** 2026-04-13 01:05:54.385877 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:54.385886 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:05:54.385894 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:54.385902 | orchestrator | 2026-04-13 01:05:54.385911 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-13 01:05:54.385919 | orchestrator | Monday 13 April 2026 01:02:58 +0000 (0:00:00.288) 0:00:34.045 ********** 2026-04-13 01:05:54.385927 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:05:54.385942 | orchestrator | 2026-04-13 01:05:54.385950 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-13 01:05:54.386174 | orchestrator | Monday 13 April 2026 01:02:59 +0000 (0:00:00.896) 0:00:34.942 ********** 2026-04-13 01:05:54.386191 | orchestrator | changed: [testbed-node-0] 
=> (item=cinder-volume) 2026-04-13 01:05:54.386200 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-04-13 01:05:54.386208 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-04-13 01:05:54.386216 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-04-13 01:05:54.386224 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-13 01:05:54.386306 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-13 01:05:54.386316 | orchestrator | 2026-04-13 01:05:54.386324 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-13 01:05:54.386333 | orchestrator | Monday 13 April 2026 01:03:02 +0000 (0:00:02.902) 0:00:37.844 ********** 2026-04-13 01:05:54.386343 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-13 01:05:54.386359 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-13 01:05:54.386368 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-13 01:05:54.386378 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-13 01:05:54.386404 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-13 01:05:54.386414 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-13 01:05:54.386423 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-13 01:05:54.386432 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-13 01:05:54.386441 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-13 01:05:54.386461 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-13 01:05:54.386471 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-13 01:05:54.386566 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-13 01:05:54.386584 | orchestrator | 2026-04-13 01:05:54.386593 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-13 01:05:54.386601 | orchestrator | Monday 13 April 2026 01:03:07 +0000 (0:00:04.902) 0:00:42.746 ********** 2026-04-13 01:05:54.386610 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-13 01:05:54.386619 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-13 01:05:54.386627 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-13 01:05:54.386635 | orchestrator | 2026-04-13 01:05:54.386644 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-13 01:05:54.386652 | orchestrator | Monday 13 April 2026 01:03:09 +0000 (0:00:02.024) 0:00:44.771 ********** 2026-04-13 01:05:54.386661 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-04-13 01:05:54.386676 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-04-13 01:05:54.386689 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-04-13 01:05:54.386697 | orchestrator | changed: 
[testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-04-13 01:05:54.386706 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-04-13 01:05:54.386714 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-04-13 01:05:54.386728 | orchestrator | 2026-04-13 01:05:54.386736 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-13 01:05:54.386744 | orchestrator | Monday 13 April 2026 01:03:13 +0000 (0:00:04.188) 0:00:48.960 ********** 2026-04-13 01:05:54.386753 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-13 01:05:54.386764 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-13 01:05:54.386777 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-13 01:05:54.386786 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-13 01:05:54.386794 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-13 01:05:54.386802 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-13 01:05:54.386810 | orchestrator | 2026-04-13 01:05:54.386818 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-13 01:05:54.386826 | orchestrator | Monday 13 April 2026 01:03:14 +0000 (0:00:01.257) 0:00:50.218 ********** 2026-04-13 01:05:54.386835 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:54.386843 | orchestrator | 2026-04-13 01:05:54.386851 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-13 01:05:54.386859 | orchestrator | Monday 13 April 2026 01:03:14 +0000 (0:00:00.499) 0:00:50.717 ********** 2026-04-13 01:05:54.386868 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:54.386876 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:05:54.386884 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:54.386893 | 
orchestrator | 2026-04-13 01:05:54.386901 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-13 01:05:54.386909 | orchestrator | Monday 13 April 2026 01:03:15 +0000 (0:00:00.454) 0:00:51.172 ********** 2026-04-13 01:05:54.386917 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:05:54.386939 | orchestrator | 2026-04-13 01:05:54.386948 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-13 01:05:54.386957 | orchestrator | Monday 13 April 2026 01:03:15 +0000 (0:00:00.520) 0:00:51.692 ********** 2026-04-13 01:05:54.386966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 01:05:54.386979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 01:05:54.386988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 01:05:54.387054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387148 | orchestrator | 2026-04-13 01:05:54.387157 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-13 
01:05:54.387165 | orchestrator | Monday 13 April 2026 01:03:21 +0000 (0:00:05.209) 0:00:56.902 ********** 2026-04-13 01:05:54.387178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-13 01:05:54.387193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387219 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:54.387233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-13 01:05:54.387242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387278 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:05:54.387287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-13 01:05:54.387300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387309 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387332 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:54.387341 | orchestrator | 2026-04-13 01:05:54.387349 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-13 01:05:54.387361 | orchestrator | Monday 13 April 2026 01:03:22 +0000 (0:00:00.931) 0:00:57.834 ********** 2026-04-13 01:05:54.387370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-13 01:05:54.387379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387413 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:54.387422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-13 01:05:54.387442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387469 | 
orchestrator | skipping: [testbed-node-1] 2026-04-13 01:05:54.387482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-13 01:05:54.387492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.387528 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:54.387536 | orchestrator | 2026-04-13 01:05:54.387545 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-13 01:05:54.387553 | orchestrator | Monday 13 April 2026 01:03:23 +0000 (0:00:01.657) 0:00:59.492 ********** 2026-04-13 01:05:54.387562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 01:05:54.387576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 01:05:54.387585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 01:05:54.387602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387686 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-13 01:05:54.387695 | orchestrator |
2026-04-13 01:05:54.387703 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-04-13 01:05:54.387712 | orchestrator | Monday 13 April 2026 01:03:28 +0000 (0:00:04.930) 0:01:04.422 **********
2026-04-13 01:05:54.387720 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-13 01:05:54.387728 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-13 01:05:54.387737 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-13 01:05:54.387745 | orchestrator |
2026-04-13 01:05:54.387753 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-04-13 01:05:54.387762 | orchestrator | Monday 13 April 2026 01:03:31 +0000 (0:00:02.332) 0:01:06.755 **********
2026-04-13 01:05:54.387775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 01:05:54.387789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 01:05:54.387802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 01:05:54.387815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.387923 | orchestrator | 2026-04-13 01:05:54.387932 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-13 01:05:54.387940 | orchestrator | Monday 13 April 2026 01:03:50 +0000 (0:00:19.626) 0:01:26.382 ********** 2026-04-13 01:05:54.387949 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:54.387957 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:05:54.387965 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:05:54.387973 | orchestrator | 2026-04-13 01:05:54.387981 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-13 01:05:54.387990 | orchestrator | Monday 13 April 2026 01:03:52 +0000 (0:00:02.187) 0:01:28.569 ********** 2026-04-13 01:05:54.388099 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:54.388109 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:05:54.388117 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:05:54.388125 | orchestrator | 2026-04-13 01:05:54.388133 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-13 01:05:54.388142 | orchestrator | Monday 13 April 
2026 01:03:54 +0000 (0:00:01.475) 0:01:30.044 ********** 2026-04-13 01:05:54.388155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-13 01:05:54.388165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.388173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.388182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.388201 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:54.388217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-13 01:05:54.388226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.388246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.388256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.388265 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:54.388273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-13 01:05:54.388292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.388306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.388315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 01:05:54.388324 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:05:54.388332 | orchestrator | 2026-04-13 01:05:54.388340 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-13 01:05:54.388349 | orchestrator | Monday 13 April 2026 01:03:55 +0000 (0:00:00.884) 0:01:30.929 ********** 2026-04-13 01:05:54.388357 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:54.388365 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:05:54.388377 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:54.388385 | orchestrator | 2026-04-13 
01:05:54.388394 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-04-13 01:05:54.388402 | orchestrator | Monday 13 April 2026 01:03:55 +0000 (0:00:00.341) 0:01:31.271 ********** 2026-04-13 01:05:54.388410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 01:05:54.388424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 01:05:54.388442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-13 01:05:54.388454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.388463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.388475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.388484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.388498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.388513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.388522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.388534 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.388543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-13 01:05:54.388555 | orchestrator | 2026-04-13 01:05:54.388562 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-13 01:05:54.388569 | orchestrator | Monday 13 April 2026 01:03:58 +0000 (0:00:03.039) 0:01:34.311 ********** 2026-04-13 01:05:54.388576 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:54.388583 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:05:54.388590 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:05:54.388597 | orchestrator | 2026-04-13 01:05:54.388604 | 
orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-13 01:05:54.388611 | orchestrator | Monday 13 April 2026 01:03:58 +0000 (0:00:00.301) 0:01:34.613 ********** 2026-04-13 01:05:54.388618 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:54.388625 | orchestrator | 2026-04-13 01:05:54.388635 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-13 01:05:54.388643 | orchestrator | Monday 13 April 2026 01:04:00 +0000 (0:00:02.106) 0:01:36.720 ********** 2026-04-13 01:05:54.388650 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:54.388656 | orchestrator | 2026-04-13 01:05:54.388663 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-13 01:05:54.388670 | orchestrator | Monday 13 April 2026 01:04:03 +0000 (0:00:02.333) 0:01:39.053 ********** 2026-04-13 01:05:54.388677 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:54.388684 | orchestrator | 2026-04-13 01:05:54.388691 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-13 01:05:54.388698 | orchestrator | Monday 13 April 2026 01:04:24 +0000 (0:00:21.193) 0:02:00.247 ********** 2026-04-13 01:05:54.388705 | orchestrator | 2026-04-13 01:05:54.388712 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-13 01:05:54.388719 | orchestrator | Monday 13 April 2026 01:04:24 +0000 (0:00:00.076) 0:02:00.323 ********** 2026-04-13 01:05:54.388726 | orchestrator | 2026-04-13 01:05:54.388733 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-13 01:05:54.388739 | orchestrator | Monday 13 April 2026 01:04:24 +0000 (0:00:00.069) 0:02:00.393 ********** 2026-04-13 01:05:54.388746 | orchestrator | 2026-04-13 01:05:54.388753 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] 
************************ 2026-04-13 01:05:54.388760 | orchestrator | Monday 13 April 2026 01:04:24 +0000 (0:00:00.070) 0:02:00.463 ********** 2026-04-13 01:05:54.388767 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:54.388774 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:05:54.388781 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:05:54.388787 | orchestrator | 2026-04-13 01:05:54.388794 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-13 01:05:54.388922 | orchestrator | Monday 13 April 2026 01:04:58 +0000 (0:00:33.574) 0:02:34.037 ********** 2026-04-13 01:05:54.388933 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:54.388940 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:05:54.388948 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:05:54.388955 | orchestrator | 2026-04-13 01:05:54.388962 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-13 01:05:54.388969 | orchestrator | Monday 13 April 2026 01:05:12 +0000 (0:00:14.255) 0:02:48.293 ********** 2026-04-13 01:05:54.388976 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:54.388982 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:05:54.388990 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:05:54.388996 | orchestrator | 2026-04-13 01:05:54.389003 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-13 01:05:54.389010 | orchestrator | Monday 13 April 2026 01:05:41 +0000 (0:00:28.783) 0:03:17.077 ********** 2026-04-13 01:05:54.389017 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:05:54.389068 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:05:54.389077 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:05:54.389084 | orchestrator | 2026-04-13 01:05:54.389091 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service 
versions] *** 2026-04-13 01:05:54.389105 | orchestrator | Monday 13 April 2026 01:05:52 +0000 (0:00:11.319) 0:03:28.397 ********** 2026-04-13 01:05:54.389112 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:05:54.389118 | orchestrator | 2026-04-13 01:05:54.389125 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:05:54.389133 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-13 01:05:54.389141 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 01:05:54.389148 | orchestrator | testbed-node-2 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 01:05:54.389155 | orchestrator | 2026-04-13 01:05:54.389162 | orchestrator | 2026-04-13 01:05:54.389169 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:05:54.389180 | orchestrator | Monday 13 April 2026 01:05:53 +0000 (0:00:00.452) 0:03:28.849 ********** 2026-04-13 01:05:54.389187 | orchestrator | =============================================================================== 2026-04-13 01:05:54.389194 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 33.57s 2026-04-13 01:05:54.389200 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 28.78s 2026-04-13 01:05:54.389206 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.19s 2026-04-13 01:05:54.389213 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 19.63s 2026-04-13 01:05:54.389219 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 14.26s 2026-04-13 01:05:54.389226 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.32s 2026-04-13 01:05:54.389232 | 
orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.61s 2026-04-13 01:05:54.389238 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.05s 2026-04-13 01:05:54.389245 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.21s 2026-04-13 01:05:54.389251 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.93s 2026-04-13 01:05:54.389257 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.90s 2026-04-13 01:05:54.389264 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.42s 2026-04-13 01:05:54.389270 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.19s 2026-04-13 01:05:54.389277 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.44s 2026-04-13 01:05:54.389283 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.38s 2026-04-13 01:05:54.389290 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.34s 2026-04-13 01:05:54.389296 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.04s 2026-04-13 01:05:54.389302 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.01s 2026-04-13 01:05:54.389309 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.90s 2026-04-13 01:05:54.389315 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.33s 2026-04-13 01:05:54.389321 | orchestrator | 2026-04-13 01:05:54 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:05:54.389328 | orchestrator | 2026-04-13 01:05:54 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 
01:05:54.389335 | orchestrator | 2026-04-13 01:05:54 | INFO  | Task 1681fa0b-3b28-4121-bf65-b776429683c6 is in state STARTED 2026-04-13 01:05:54.389341 | orchestrator | 2026-04-13 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:05:57.420951 | orchestrator | 2026-04-13 01:05:57 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:05:57.421196 | orchestrator | 2026-04-13 01:05:57 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:05:57.421644 | orchestrator | 2026-04-13 01:05:57 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:05:57.423387 | orchestrator | 2026-04-13 01:05:57 | INFO  | Task 1681fa0b-3b28-4121-bf65-b776429683c6 is in state STARTED 2026-04-13 01:05:57.423430 | orchestrator | 2026-04-13 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:06:00.519663 | orchestrator | 2026-04-13 01:06:00 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:06:00.520579 | orchestrator | 2026-04-13 01:06:00 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:06:00.522914 | orchestrator | 2026-04-13 01:06:00 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:06:00.525473 | orchestrator | 2026-04-13 01:06:00 | INFO  | Task 1681fa0b-3b28-4121-bf65-b776429683c6 is in state STARTED 2026-04-13 01:06:00.525965 | orchestrator | 2026-04-13 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:06:03.558525 | orchestrator | 2026-04-13 01:06:03 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:06:03.559213 | orchestrator | 2026-04-13 01:06:03 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:06:03.561483 | orchestrator | 2026-04-13 01:06:03 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:06:03.562317 | orchestrator 
| 2026-04-13 01:06:03 | INFO  | Task 1681fa0b-3b28-4121-bf65-b776429683c6 is in state STARTED
2026-04-13 01:06:03.562367 | orchestrator | 2026-04-13 01:06:03 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:07:16.341753 | orchestrator | 2026-04-13 01:07:16 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:07:16.342137 | orchestrator | 2026-04-13 01:07:16 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:07:16.342449 | orchestrator | 2026-04-13 01:07:16 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:07:16.343132 | orchestrator | 2026-04-13 01:07:16 | INFO  | Task
1681fa0b-3b28-4121-bf65-b776429683c6 is in state STARTED 2026-04-13 01:07:16.343615 | orchestrator | 2026-04-13 01:07:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:07:19.372798 | orchestrator | 2026-04-13 01:07:19 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:07:19.373103 | orchestrator | 2026-04-13 01:07:19 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:07:19.374913 | orchestrator | 2026-04-13 01:07:19 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:07:19.375959 | orchestrator | 2026-04-13 01:07:19 | INFO  | Task 1681fa0b-3b28-4121-bf65-b776429683c6 is in state STARTED 2026-04-13 01:07:19.376022 | orchestrator | 2026-04-13 01:07:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:07:22.405173 | orchestrator | 2026-04-13 01:07:22 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:07:22.405377 | orchestrator | 2026-04-13 01:07:22 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:07:22.406602 | orchestrator | 2026-04-13 01:07:22 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:07:22.407158 | orchestrator | 2026-04-13 01:07:22 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED 2026-04-13 01:07:22.408576 | orchestrator | 2026-04-13 01:07:22.408607 | orchestrator | 2026-04-13 01:07:22 | INFO  | Task 1681fa0b-3b28-4121-bf65-b776429683c6 is in state SUCCESS 2026-04-13 01:07:22.410150 | orchestrator | 2026-04-13 01:07:22.410187 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 01:07:22.410227 | orchestrator | 2026-04-13 01:07:22.410238 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 01:07:22.410248 | orchestrator | Monday 13 April 2026 01:05:16 +0000 (0:00:00.360) 0:00:00.360 
********** 2026-04-13 01:07:22.410259 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:07:22.410287 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:07:22.410298 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:07:22.410308 | orchestrator | 2026-04-13 01:07:22.410319 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 01:07:22.410329 | orchestrator | Monday 13 April 2026 01:05:17 +0000 (0:00:00.342) 0:00:00.703 ********** 2026-04-13 01:07:22.410339 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-13 01:07:22.410350 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-13 01:07:22.410360 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-13 01:07:22.410370 | orchestrator | 2026-04-13 01:07:22.410380 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-04-13 01:07:22.410390 | orchestrator | 2026-04-13 01:07:22.410400 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-13 01:07:22.410410 | orchestrator | Monday 13 April 2026 01:05:17 +0000 (0:00:00.320) 0:00:01.023 ********** 2026-04-13 01:07:22.410420 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:07:22.410431 | orchestrator | 2026-04-13 01:07:22.410441 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-04-13 01:07:22.410451 | orchestrator | Monday 13 April 2026 01:05:18 +0000 (0:00:00.570) 0:00:01.594 ********** 2026-04-13 01:07:22.410462 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-04-13 01:07:22.410472 | orchestrator | 2026-04-13 01:07:22.410482 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-04-13 01:07:22.410492 | orchestrator | Monday 13 April 
2026 01:05:21 +0000 (0:00:03.946) 0:00:05.540 ********** 2026-04-13 01:07:22.410502 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-04-13 01:07:22.410524 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-04-13 01:07:22.410535 | orchestrator | 2026-04-13 01:07:22.410545 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-04-13 01:07:22.410555 | orchestrator | Monday 13 April 2026 01:05:28 +0000 (0:00:06.513) 0:00:12.053 ********** 2026-04-13 01:07:22.410565 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-13 01:07:22.410575 | orchestrator | 2026-04-13 01:07:22.410585 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-04-13 01:07:22.410595 | orchestrator | Monday 13 April 2026 01:05:31 +0000 (0:00:03.149) 0:00:15.203 ********** 2026-04-13 01:07:22.410605 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-04-13 01:07:22.410615 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-13 01:07:22.410625 | orchestrator | 2026-04-13 01:07:22.410636 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-04-13 01:07:22.410647 | orchestrator | Monday 13 April 2026 01:05:35 +0000 (0:00:03.848) 0:00:19.052 ********** 2026-04-13 01:07:22.410657 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-13 01:07:22.410667 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-04-13 01:07:22.410691 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-04-13 01:07:22.410702 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-04-13 01:07:22.410712 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-04-13 01:07:22.410722 | orchestrator | 2026-04-13 01:07:22.410733 
| orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-04-13 01:07:22.410745 | orchestrator | Monday 13 April 2026 01:05:50 +0000 (0:00:15.506) 0:00:34.559 ********** 2026-04-13 01:07:22.410757 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-04-13 01:07:22.410768 | orchestrator | 2026-04-13 01:07:22.410780 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-13 01:07:22.410792 | orchestrator | Monday 13 April 2026 01:05:54 +0000 (0:00:03.923) 0:00:38.482 ********** 2026-04-13 01:07:22.410806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 01:07:22.410833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.410848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 01:07:22.410866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.410884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 01:07:22.410895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.410914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.410925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.410936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.410946 | orchestrator | 2026-04-13 01:07:22.410957 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-13 01:07:22.410989 | orchestrator | Monday 13 April 2026 01:05:57 +0000 (0:00:02.419) 0:00:40.902 ********** 2026-04-13 01:07:22.411002 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-13 01:07:22.411017 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-13 01:07:22.411027 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 
2026-04-13 01:07:22.411043 | orchestrator | 2026-04-13 01:07:22.411054 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-13 01:07:22.411063 | orchestrator | Monday 13 April 2026 01:05:58 +0000 (0:00:01.135) 0:00:42.038 ********** 2026-04-13 01:07:22.411074 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:07:22.411083 | orchestrator | 2026-04-13 01:07:22.411093 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-13 01:07:22.411103 | orchestrator | Monday 13 April 2026 01:05:58 +0000 (0:00:00.189) 0:00:42.227 ********** 2026-04-13 01:07:22.411113 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:07:22.411123 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:07:22.411133 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:07:22.411143 | orchestrator | 2026-04-13 01:07:22.411153 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-13 01:07:22.411163 | orchestrator | Monday 13 April 2026 01:05:59 +0000 (0:00:00.531) 0:00:42.758 ********** 2026-04-13 01:07:22.411173 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:07:22.411183 | orchestrator | 2026-04-13 01:07:22.411193 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-13 01:07:22.411202 | orchestrator | Monday 13 April 2026 01:06:00 +0000 (0:00:00.955) 0:00:43.714 ********** 2026-04-13 01:07:22.411213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 01:07:22.411231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 01:07:22.411243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 01:07:22.411263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.411274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.411284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.411295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.411311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.411323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.411338 | orchestrator | 2026-04-13 01:07:22.411349 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-13 01:07:22.411359 | orchestrator | Monday 13 April 2026 01:06:03 +0000 (0:00:03.576) 0:00:47.291 ********** 2026-04-13 01:07:22.411374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-13 01:07:22.411385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.411396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.411406 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:07:22.411422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2026-04-13 01:07:22.411433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.411449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.411460 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:07:22.411474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-13 01:07:22.411485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.411496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.411506 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:07:22.411516 | orchestrator | 2026-04-13 01:07:22.411527 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-13 01:07:22.411537 | orchestrator | Monday 13 April 2026 01:06:05 +0000 
(0:00:01.831) 0:00:49.122 ********** 2026-04-13 01:07:22.411552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-13 01:07:22.411569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.411588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.411599 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:07:22.411609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-13 01:07:22.411620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.411631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.411641 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:07:22.411658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-13 01:07:22.411674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.411689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.411699 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:07:22.411709 | orchestrator | 2026-04-13 01:07:22.411720 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-13 01:07:22.411730 | orchestrator | Monday 13 April 2026 01:06:06 +0000 (0:00:01.380) 0:00:50.503 ********** 2026-04-13 01:07:22.411740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 01:07:22.411960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 01:07:22.412016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 01:07:22.412041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412119 | orchestrator | 2026-04-13 01:07:22.412129 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-13 01:07:22.412140 | orchestrator | Monday 13 April 2026 01:06:11 +0000 (0:00:04.255) 0:00:54.759 ********** 2026-04-13 01:07:22.412150 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:07:22.412160 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:07:22.412170 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:07:22.412180 | orchestrator | 2026-04-13 01:07:22.412190 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-13 01:07:22.412200 | orchestrator | Monday 13 April 2026 01:06:13 +0000 (0:00:02.686) 0:00:57.446 ********** 2026-04-13 01:07:22.412210 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-13 01:07:22.412220 | orchestrator | 2026-04-13 01:07:22.412230 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-13 01:07:22.412240 | orchestrator | Monday 13 April 2026 01:06:15 +0000 (0:00:01.408) 0:00:58.854 ********** 2026-04-13 01:07:22.412250 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:07:22.412260 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:07:22.412270 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:07:22.412280 | orchestrator | 2026-04-13 01:07:22.412289 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-13 01:07:22.412300 | orchestrator | Monday 13 April 2026 01:06:17 +0000 (0:00:01.741) 0:01:00.596 ********** 2026-04-13 01:07:22.412315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 01:07:22.412326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 01:07:22.412347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 01:07:22.412359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412432 | orchestrator | 2026-04-13 01:07:22.412442 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-13 01:07:22.412452 | orchestrator | Monday 13 April 2026 01:06:26 +0000 (0:00:09.605) 0:01:10.201 ********** 2026-04-13 01:07:22.412468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-13 01:07:22.412480 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.412494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.412505 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:07:22.412516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-13 01:07:22.412532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.412548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.412558 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:07:22.412569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-13 01:07:22.412586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.412598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:07:22.412610 | orchestrator | skipping: 
[testbed-node-2] 2026-04-13 01:07:22.412627 | orchestrator | 2026-04-13 01:07:22.412640 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-04-13 01:07:22.412651 | orchestrator | Monday 13 April 2026 01:06:27 +0000 (0:00:00.668) 0:01:10.870 ********** 2026-04-13 01:07:22.412663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 01:07:22.412681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 01:07:22.412694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-13 01:07:22.412709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:07:22.412842 | orchestrator | 2026-04-13 01:07:22.412854 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-13 01:07:22.412865 | orchestrator | Monday 13 April 2026 01:06:30 +0000 (0:00:03.335) 0:01:14.205 ********** 2026-04-13 01:07:22.412877 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:07:22.412889 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:07:22.412900 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:07:22.412912 | orchestrator | 2026-04-13 01:07:22.412924 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-04-13 01:07:22.412936 | orchestrator | Monday 13 April 2026 01:06:31 +0000 (0:00:00.678) 0:01:14.883 ********** 2026-04-13 01:07:22.412946 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:07:22.412956 | orchestrator | 
2026-04-13 01:07:22.412966 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-04-13 01:07:22.413043 | orchestrator | Monday 13 April 2026 01:06:33 +0000 (0:00:01.885) 0:01:16.768 ********** 2026-04-13 01:07:22.413054 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:07:22.413064 | orchestrator | 2026-04-13 01:07:22.413074 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-13 01:07:22.413091 | orchestrator | Monday 13 April 2026 01:06:35 +0000 (0:00:01.884) 0:01:18.653 ********** 2026-04-13 01:07:22.413101 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:07:22.413111 | orchestrator | 2026-04-13 01:07:22.413122 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-13 01:07:22.413132 | orchestrator | Monday 13 April 2026 01:06:47 +0000 (0:00:11.944) 0:01:30.597 ********** 2026-04-13 01:07:22.413142 | orchestrator | 2026-04-13 01:07:22.413152 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-13 01:07:22.413162 | orchestrator | Monday 13 April 2026 01:06:47 +0000 (0:00:00.290) 0:01:30.888 ********** 2026-04-13 01:07:22.413172 | orchestrator | 2026-04-13 01:07:22.413182 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-13 01:07:22.413192 | orchestrator | Monday 13 April 2026 01:06:47 +0000 (0:00:00.073) 0:01:30.962 ********** 2026-04-13 01:07:22.413202 | orchestrator | 2026-04-13 01:07:22.413212 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-13 01:07:22.413222 | orchestrator | Monday 13 April 2026 01:06:47 +0000 (0:00:00.086) 0:01:31.048 ********** 2026-04-13 01:07:22.413232 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:07:22.413242 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:07:22.413252 | orchestrator | 
changed: [testbed-node-2] 2026-04-13 01:07:22.413262 | orchestrator | 2026-04-13 01:07:22.413273 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-13 01:07:22.413283 | orchestrator | Monday 13 April 2026 01:06:56 +0000 (0:00:09.245) 0:01:40.294 ********** 2026-04-13 01:07:22.413293 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:07:22.413303 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:07:22.413313 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:07:22.413323 | orchestrator | 2026-04-13 01:07:22.413333 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-13 01:07:22.413343 | orchestrator | Monday 13 April 2026 01:07:09 +0000 (0:00:13.254) 0:01:53.549 ********** 2026-04-13 01:07:22.413353 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:07:22.413363 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:07:22.413373 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:07:22.413383 | orchestrator | 2026-04-13 01:07:22.413393 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:07:22.413404 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 01:07:22.413415 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-13 01:07:22.413426 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-13 01:07:22.413436 | orchestrator | 2026-04-13 01:07:22.413446 | orchestrator | 2026-04-13 01:07:22.413456 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:07:22.413466 | orchestrator | Monday 13 April 2026 01:07:19 +0000 (0:00:09.669) 0:02:03.218 ********** 2026-04-13 01:07:22.413476 | orchestrator | 
=============================================================================== 2026-04-13 01:07:22.413486 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.51s 2026-04-13 01:07:22.413502 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 13.25s 2026-04-13 01:07:22.413513 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.94s 2026-04-13 01:07:22.413523 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 9.67s 2026-04-13 01:07:22.413533 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.61s 2026-04-13 01:07:22.413543 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.25s 2026-04-13 01:07:22.413553 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.51s 2026-04-13 01:07:22.413569 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.26s 2026-04-13 01:07:22.413579 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.95s 2026-04-13 01:07:22.413587 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.92s 2026-04-13 01:07:22.413595 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.85s 2026-04-13 01:07:22.413603 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.58s 2026-04-13 01:07:22.413611 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.34s 2026-04-13 01:07:22.413619 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.15s 2026-04-13 01:07:22.413628 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.69s 2026-04-13 01:07:22.413636 | orchestrator | barbican : 
Ensuring config directories exist ---------------------------- 2.42s 2026-04-13 01:07:22.413644 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 1.89s 2026-04-13 01:07:22.413652 | orchestrator | barbican : Creating barbican database ----------------------------------- 1.88s 2026-04-13 01:07:22.413660 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.83s 2026-04-13 01:07:22.413669 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 1.74s 2026-04-13 01:07:22.413677 | orchestrator | 2026-04-13 01:07:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:07:25.435581 | orchestrator | 2026-04-13 01:07:25 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:07:25.435811 | orchestrator | 2026-04-13 01:07:25 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:07:25.436339 | orchestrator | 2026-04-13 01:07:25 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:07:25.437629 | orchestrator | 2026-04-13 01:07:25 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED 2026-04-13 01:07:25.437703 | orchestrator | 2026-04-13 01:07:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:07:28.456452 | orchestrator | 2026-04-13 01:07:28 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:07:28.456958 | orchestrator | 2026-04-13 01:07:28 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:07:28.458423 | orchestrator | 2026-04-13 01:07:28 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:07:28.459319 | orchestrator | 2026-04-13 01:07:28 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED 2026-04-13 01:07:28.459389 | orchestrator | 2026-04-13 01:07:28 | INFO  | Wait 1 second(s) until the next check 
2026-04-13 01:07:31.499644 | orchestrator | 2026-04-13 01:07:31 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:07:31.499751 | orchestrator | 2026-04-13 01:07:31 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:07:31.500186 | orchestrator | 2026-04-13 01:07:31 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:07:31.501054 | orchestrator | 2026-04-13 01:07:31 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED 2026-04-13 01:07:31.501089 | orchestrator | 2026-04-13 01:07:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:07:34.531422 | orchestrator | 2026-04-13 01:07:34 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:07:34.531628 | orchestrator | 2026-04-13 01:07:34 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:07:34.532613 | orchestrator | 2026-04-13 01:07:34 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:07:34.533407 | orchestrator | 2026-04-13 01:07:34 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED 2026-04-13 01:07:34.533488 | orchestrator | 2026-04-13 01:07:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:07:37.552403 | orchestrator | 2026-04-13 01:07:37 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:07:37.552870 | orchestrator | 2026-04-13 01:07:37 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:07:37.553677 | orchestrator | 2026-04-13 01:07:37 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:07:37.555052 | orchestrator | 2026-04-13 01:07:37 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED 2026-04-13 01:07:37.555931 | orchestrator | 2026-04-13 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:07:40.582757 | 
orchestrator | 2026-04-13 01:07:40 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:07:40.583360 | orchestrator | 2026-04-13 01:07:40 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:07:40.583955 | orchestrator | 2026-04-13 01:07:40 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:07:40.584809 | orchestrator | 2026-04-13 01:07:40 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED 2026-04-13 01:07:40.584838 | orchestrator | 2026-04-13 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:07:43.615902 | orchestrator | 2026-04-13 01:07:43 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:07:43.616012 | orchestrator | 2026-04-13 01:07:43 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:07:43.616028 | orchestrator | 2026-04-13 01:07:43 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:07:43.616040 | orchestrator | 2026-04-13 01:07:43 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED 2026-04-13 01:07:43.616052 | orchestrator | 2026-04-13 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:07:46.679239 | orchestrator | 2026-04-13 01:07:46 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:07:46.685917 | orchestrator | 2026-04-13 01:07:46 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:07:46.688343 | orchestrator | 2026-04-13 01:07:46 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:07:46.690861 | orchestrator | 2026-04-13 01:07:46 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED 2026-04-13 01:07:46.691251 | orchestrator | 2026-04-13 01:07:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:07:49.735548 | orchestrator | 2026-04-13 
01:07:49 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:07:49.738315 | orchestrator | 2026-04-13 01:07:49 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:07:49.740503 | orchestrator | 2026-04-13 01:07:49 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:07:49.742143 | orchestrator | 2026-04-13 01:07:49 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED 2026-04-13 01:07:49.742821 | orchestrator | 2026-04-13 01:07:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:07:52.795106 | orchestrator | 2026-04-13 01:07:52 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:07:52.796847 | orchestrator | 2026-04-13 01:07:52 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:07:52.798061 | orchestrator | 2026-04-13 01:07:52 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:07:52.799526 | orchestrator | 2026-04-13 01:07:52 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED 2026-04-13 01:07:52.799673 | orchestrator | 2026-04-13 01:07:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:07:55.832544 | orchestrator | 2026-04-13 01:07:55 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:07:55.832675 | orchestrator | 2026-04-13 01:07:55 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:07:55.836027 | orchestrator | 2026-04-13 01:07:55 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:07:55.836791 | orchestrator | 2026-04-13 01:07:55 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED 2026-04-13 01:07:55.836870 | orchestrator | 2026-04-13 01:07:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:07:58.875135 | orchestrator | 2026-04-13 01:07:58 | INFO  | Task 
b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:07:58.875232 | orchestrator | 2026-04-13 01:07:58 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:07:58.875689 | orchestrator | 2026-04-13 01:07:58 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:07:58.878821 | orchestrator | 2026-04-13 01:07:58 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED 2026-04-13 01:07:58.878903 | orchestrator | 2026-04-13 01:07:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:01.921853 | orchestrator | 2026-04-13 01:08:01 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:08:01.922754 | orchestrator | 2026-04-13 01:08:01 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:08:01.923939 | orchestrator | 2026-04-13 01:08:01 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:08:01.925430 | orchestrator | 2026-04-13 01:08:01 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED 2026-04-13 01:08:01.925485 | orchestrator | 2026-04-13 01:08:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:04.962145 | orchestrator | 2026-04-13 01:08:04 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:08:04.968559 | orchestrator | 2026-04-13 01:08:04 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED 2026-04-13 01:08:04.969692 | orchestrator | 2026-04-13 01:08:04 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:08:04.972344 | orchestrator | 2026-04-13 01:08:04 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED 2026-04-13 01:08:04.972521 | orchestrator | 2026-04-13 01:08:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:08.018452 | orchestrator | 2026-04-13 01:08:08 | INFO  | Task 
b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:08.020505 | orchestrator | 2026-04-13 01:08:08 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:08.021286 | orchestrator | 2026-04-13 01:08:08 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:08.023147 | orchestrator | 2026-04-13 01:08:08 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED
2026-04-13 01:08:08.023248 | orchestrator | 2026-04-13 01:08:08 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:11.057158 | orchestrator | 2026-04-13 01:08:11 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:11.057319 | orchestrator | 2026-04-13 01:08:11 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:11.058577 | orchestrator | 2026-04-13 01:08:11 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:11.059659 | orchestrator | 2026-04-13 01:08:11 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED
2026-04-13 01:08:11.059721 | orchestrator | 2026-04-13 01:08:11 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:14.085602 | orchestrator | 2026-04-13 01:08:14 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:14.089030 | orchestrator | 2026-04-13 01:08:14 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:14.090902 | orchestrator | 2026-04-13 01:08:14 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:14.090962 | orchestrator | 2026-04-13 01:08:14 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED
2026-04-13 01:08:14.090969 | orchestrator | 2026-04-13 01:08:14 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:17.132627 | orchestrator | 2026-04-13 01:08:17 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:17.132917 | orchestrator | 2026-04-13 01:08:17 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:17.133690 | orchestrator | 2026-04-13 01:08:17 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:17.134290 | orchestrator | 2026-04-13 01:08:17 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED
2026-04-13 01:08:17.134328 | orchestrator | 2026-04-13 01:08:17 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:20.182228 | orchestrator | 2026-04-13 01:08:20 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:20.182477 | orchestrator | 2026-04-13 01:08:20 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:20.183324 | orchestrator | 2026-04-13 01:08:20 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:20.184360 | orchestrator | 2026-04-13 01:08:20 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED
2026-04-13 01:08:20.184443 | orchestrator | 2026-04-13 01:08:20 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:23.247516 | orchestrator | 2026-04-13 01:08:23 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:23.253221 | orchestrator | 2026-04-13 01:08:23 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:23.256268 | orchestrator | 2026-04-13 01:08:23 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:23.257169 | orchestrator | 2026-04-13 01:08:23 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state STARTED
2026-04-13 01:08:23.257215 | orchestrator | 2026-04-13 01:08:23 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:26.307490 | orchestrator | 2026-04-13 01:08:26 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:26.308018 | orchestrator | 2026-04-13 01:08:26 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:26.308844 | orchestrator | 2026-04-13 01:08:26 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:26.309461 | orchestrator | 2026-04-13 01:08:26 | INFO  | Task 827c0b21-618e-4de9-af72-4f4e4c721efa is in state SUCCESS
2026-04-13 01:08:26.310396 | orchestrator | 2026-04-13 01:08:26 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:08:26.311572 | orchestrator | 2026-04-13 01:08:26 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:29.342583 | orchestrator | 2026-04-13 01:08:29 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:29.345028 | orchestrator | 2026-04-13 01:08:29 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:29.345520 | orchestrator | 2026-04-13 01:08:29 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:29.346743 | orchestrator | 2026-04-13 01:08:29 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:08:29.346798 | orchestrator | 2026-04-13 01:08:29 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:32.388329 | orchestrator | 2026-04-13 01:08:32 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:32.388877 | orchestrator | 2026-04-13 01:08:32 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:32.389833 | orchestrator | 2026-04-13 01:08:32 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:32.390625 | orchestrator | 2026-04-13 01:08:32 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:08:32.390662 | orchestrator | 2026-04-13 01:08:32 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:35.419885 | orchestrator | 2026-04-13 01:08:35 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:35.420359 | orchestrator | 2026-04-13 01:08:35 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:35.421322 | orchestrator | 2026-04-13 01:08:35 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:35.421884 | orchestrator | 2026-04-13 01:08:35 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:08:35.421954 | orchestrator | 2026-04-13 01:08:35 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:38.443680 | orchestrator | 2026-04-13 01:08:38 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:38.444263 | orchestrator | 2026-04-13 01:08:38 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:38.446411 | orchestrator | 2026-04-13 01:08:38 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:38.447374 | orchestrator | 2026-04-13 01:08:38 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:08:38.447449 | orchestrator | 2026-04-13 01:08:38 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:41.475215 | orchestrator | 2026-04-13 01:08:41 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:41.476134 | orchestrator | 2026-04-13 01:08:41 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:41.477770 | orchestrator | 2026-04-13 01:08:41 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:41.478836 | orchestrator | 2026-04-13 01:08:41 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:08:41.478945 | orchestrator | 2026-04-13 01:08:41 | INFO  | Wait 1 second(s) until the next check
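The repeated `is in state STARTED` / `Wait 1 second(s) until the next check` records above are produced by a task-polling loop in the OSISM tooling. A minimal sketch of that pattern, where `get_state` is a hypothetical callback standing in for the real task-state query against the task backend:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll each task until it leaves the running state.

    get_state(task_id) -> str is a hypothetical callback; the real
    orchestrator queries its task backend instead.
    """
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("SUCCESS", "FAILURE"):
                still_running.append(task_id)
        pending = still_running
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Note that while the message says "Wait 1 second(s)", the log timestamps advance by roughly three seconds per round, so the per-round cost also includes the state queries themselves.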
2026-04-13 01:08:44.519282 | orchestrator | 2026-04-13 01:08:44 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:44.521776 | orchestrator | 2026-04-13 01:08:44 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:44.522740 | orchestrator | 2026-04-13 01:08:44 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:44.524246 | orchestrator | 2026-04-13 01:08:44 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:08:44.524546 | orchestrator | 2026-04-13 01:08:44 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:47.558659 | orchestrator | 2026-04-13 01:08:47 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:47.558973 | orchestrator | 2026-04-13 01:08:47 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:47.563774 | orchestrator | 2026-04-13 01:08:47 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:47.566463 | orchestrator | 2026-04-13 01:08:47 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:08:47.566666 | orchestrator | 2026-04-13 01:08:47 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:50.609284 | orchestrator | 2026-04-13 01:08:50 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:50.611091 | orchestrator | 2026-04-13 01:08:50 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:50.614681 | orchestrator | 2026-04-13 01:08:50 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:50.619223 | orchestrator | 2026-04-13 01:08:50 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:08:50.619431 | orchestrator | 2026-04-13 01:08:50 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:53.664448 | orchestrator | 2026-04-13 01:08:53 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:53.666519 | orchestrator | 2026-04-13 01:08:53 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:53.668156 | orchestrator | 2026-04-13 01:08:53 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:53.670403 | orchestrator | 2026-04-13 01:08:53 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:08:53.670446 | orchestrator | 2026-04-13 01:08:53 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:56.703472 | orchestrator | 2026-04-13 01:08:56 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:56.703890 | orchestrator | 2026-04-13 01:08:56 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:56.705400 | orchestrator | 2026-04-13 01:08:56 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:56.708831 | orchestrator | 2026-04-13 01:08:56 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:08:56.708902 | orchestrator | 2026-04-13 01:08:56 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:59.754107 | orchestrator | 2026-04-13 01:08:59 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:08:59.754341 | orchestrator | 2026-04-13 01:08:59 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:08:59.756366 | orchestrator | 2026-04-13 01:08:59 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:08:59.756901 | orchestrator | 2026-04-13 01:08:59 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:08:59.757032 | orchestrator | 2026-04-13 01:08:59 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:09:02.803091 | orchestrator | 2026-04-13 01:09:02 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:09:02.804677 | orchestrator | 2026-04-13 01:09:02 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:09:02.806970 | orchestrator | 2026-04-13 01:09:02 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:09:02.810746 | orchestrator | 2026-04-13 01:09:02 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:09:02.811166 | orchestrator | 2026-04-13 01:09:02 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:09:05.863007 | orchestrator | 2026-04-13 01:09:05 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:09:05.866108 | orchestrator | 2026-04-13 01:09:05 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:09:05.867871 | orchestrator | 2026-04-13 01:09:05 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:09:05.870095 | orchestrator | 2026-04-13 01:09:05 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:09:05.870315 | orchestrator | 2026-04-13 01:09:05 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:09:08.902935 | orchestrator | 2026-04-13 01:09:08 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:09:08.903389 | orchestrator | 2026-04-13 01:09:08 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:09:08.904390 | orchestrator | 2026-04-13 01:09:08 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:09:08.905324 | orchestrator | 2026-04-13 01:09:08 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:09:08.905551 | orchestrator | 2026-04-13 01:09:08 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:09:11.944482 | orchestrator | 2026-04-13 01:09:11 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:09:11.945185 | orchestrator | 2026-04-13 01:09:11 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:09:11.946218 | orchestrator | 2026-04-13 01:09:11 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:09:11.947340 | orchestrator | 2026-04-13 01:09:11 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:09:11.947463 | orchestrator | 2026-04-13 01:09:11 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:09:14.978547 | orchestrator | 2026-04-13 01:09:14 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:09:14.978986 | orchestrator | 2026-04-13 01:09:14 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state STARTED
2026-04-13 01:09:14.979643 | orchestrator | 2026-04-13 01:09:14 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED
2026-04-13 01:09:14.980345 | orchestrator | 2026-04-13 01:09:14 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED
2026-04-13 01:09:14.980475 | orchestrator | 2026-04-13 01:09:14 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:09:18.013705 | orchestrator | 2026-04-13 01:09:18 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED
2026-04-13 01:09:18.016205 | orchestrator | 2026-04-13 01:09:18 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED
2026-04-13 01:09:18.022273 | orchestrator |
2026-04-13 01:09:18.022344 | orchestrator |
2026-04-13 01:09:18.022359 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-04-13 01:09:18.022372 | orchestrator |
2026-04-13 01:09:18.022386 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-04-13 01:09:18.022407 | orchestrator | Monday 13 April 2026 01:07:26 +0000 (0:00:00.196) 0:00:00.196 **********
2026-04-13 01:09:18.022427 | orchestrator | changed: [localhost]
2026-04-13 01:09:18.022447 | orchestrator |
2026-04-13 01:09:18.022466 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-04-13 01:09:18.022485 | orchestrator | Monday 13 April 2026 01:07:27 +0000 (0:00:01.322) 0:00:01.519 **********
2026-04-13 01:09:18.022504 | orchestrator | changed: [localhost]
2026-04-13 01:09:18.022522 | orchestrator |
2026-04-13 01:09:18.022541 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-04-13 01:09:18.022560 | orchestrator | Monday 13 April 2026 01:08:16 +0000 (0:00:48.678) 0:00:50.198 **********
2026-04-13 01:09:18.022578 | orchestrator | changed: [localhost]
2026-04-13 01:09:18.022596 | orchestrator |
2026-04-13 01:09:18.022615 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 01:09:18.022635 | orchestrator |
2026-04-13 01:09:18.022655 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 01:09:18.022696 | orchestrator | Monday 13 April 2026 01:08:21 +0000 (0:00:05.503) 0:00:55.701 **********
2026-04-13 01:09:18.022709 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:09:18.022721 | orchestrator | ok: [testbed-node-1]
2026-04-13 01:09:18.022741 | orchestrator | ok: [testbed-node-2]
2026-04-13 01:09:18.022752 | orchestrator |
2026-04-13 01:09:18.022764 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 01:09:18.022775 | orchestrator | Monday 13 April 2026 01:08:22 +0000 (0:00:00.707) 0:00:56.408 **********
2026-04-13 01:09:18.022786 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-04-13 01:09:18.022798 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-04-13 01:09:18.022809 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-04-13 01:09:18.022821 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-04-13 01:09:18.022832 | orchestrator |
2026-04-13 01:09:18.022843 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-04-13 01:09:18.022855 | orchestrator | skipping: no hosts matched
2026-04-13 01:09:18.022869 | orchestrator |
2026-04-13 01:09:18.022882 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 01:09:18.022944 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 01:09:18.022960 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 01:09:18.022975 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 01:09:18.022988 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 01:09:18.023002 | orchestrator |
2026-04-13 01:09:18.023015 | orchestrator |
2026-04-13 01:09:18.023028 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 01:09:18.023042 | orchestrator | Monday 13 April 2026 01:08:23 +0000 (0:00:01.268) 0:00:57.676 **********
2026-04-13 01:09:18.023056 | orchestrator | ===============================================================================
2026-04-13 01:09:18.023069 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 48.68s
2026-04-13 01:09:18.023082 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.50s
2026-04-13 01:09:18.023095 | orchestrator | Ensure the destination directory exists --------------------------------- 1.32s
2026-04-13 01:09:18.023132 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.27s
2026-04-13 01:09:18.023148 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s
2026-04-13 01:09:18.023169 | orchestrator |
2026-04-13 01:09:18.023242 | orchestrator | 2026-04-13 01:09:18 | INFO  | Task ae5a1de1-08d6-4a7e-9a81-a788c330b18c is in state SUCCESS
2026-04-13 01:09:18.024560 | orchestrator |
2026-04-13 01:09:18.024603 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 01:09:18.024613 | orchestrator |
2026-04-13 01:09:18.024620 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 01:09:18.024627 | orchestrator | Monday 13 April 2026 01:05:58 +0000 (0:00:00.292) 0:00:00.292 **********
2026-04-13 01:09:18.024633 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:09:18.024641 | orchestrator | ok: [testbed-node-1]
2026-04-13 01:09:18.024647 | orchestrator | ok: [testbed-node-2]
2026-04-13 01:09:18.024654 | orchestrator |
2026-04-13 01:09:18.024661 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 01:09:18.024667 | orchestrator | Monday 13 April 2026 01:05:58 +0000 (0:00:00.365) 0:00:00.658 **********
2026-04-13 01:09:18.024675 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-13 01:09:18.024681 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-13 01:09:18.024688 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-13 01:09:18.024694 | orchestrator |
2026-04-13 01:09:18.024701 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-13 01:09:18.024707 | orchestrator |
2026-04-13 01:09:18.024714 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-13 01:09:18.024720 | orchestrator | Monday 13 April 2026 01:05:58 +0000 (0:00:00.532) 0:00:01.191 **********
2026-04-13 01:09:18.024749 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 01:09:18.024757 | orchestrator |
2026-04-13 01:09:18.024763 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-04-13 01:09:18.024770 | orchestrator | Monday 13 April 2026 01:06:00 +0000 (0:00:01.106) 0:00:02.298 **********
2026-04-13 01:09:18.024776 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-04-13 01:09:18.024783 | orchestrator |
2026-04-13 01:09:18.024789 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-04-13 01:09:18.024796 | orchestrator | Monday 13 April 2026 01:06:04 +0000 (0:00:04.001) 0:00:06.299 **********
2026-04-13 01:09:18.024802 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-04-13 01:09:18.024828 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-04-13 01:09:18.024836 | orchestrator |
2026-04-13 01:09:18.024843 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-04-13 01:09:18.024850 | orchestrator | Monday 13 April 2026 01:06:11 +0000 (0:00:06.942) 0:00:13.242 **********
2026-04-13 01:09:18.024857 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-13 01:09:18.024863 | orchestrator |
2026-04-13 01:09:18.024870 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-04-13 01:09:18.024877 | orchestrator | Monday 13 April 2026 01:06:14 +0000 (0:00:03.570) 0:00:16.812 **********
2026-04-13 01:09:18.024883 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-04-13 01:09:18.024890 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-13 01:09:18.024929 | orchestrator |
2026-04-13 01:09:18.024936 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-04-13 01:09:18.024943 | orchestrator | Monday 13 April 2026 01:06:18 +0000 (0:00:03.870) 0:00:20.683 **********
2026-04-13 01:09:18.024950 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-13 01:09:18.024956 | orchestrator |
2026-04-13 01:09:18.024963 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-04-13 01:09:18.025015 | orchestrator | Monday 13 April 2026 01:06:21 +0000 (0:00:03.050) 0:00:23.734 **********
2026-04-13 01:09:18.025049 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-04-13 01:09:18.025056 | orchestrator |
2026-04-13 01:09:18.025062 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-04-13 01:09:18.025069 | orchestrator | Monday 13 April 2026 01:06:25 +0000 (0:00:03.974) 0:00:27.708 **********
2026-04-13 01:09:18.025078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.025106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-13 01:09:18.025114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.025122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.025129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.025141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-13 01:09:18.025149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.025164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-13 01:09:18.025173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.025181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.025189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.025202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.025210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.025217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.025232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.025240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.025248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.025256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.025267 | orchestrator |
2026-04-13 01:09:18.025275 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-04-13 01:09:18.025282 | orchestrator | Monday 13 April 2026 01:06:30 +0000 (0:00:04.777) 0:00:32.486 **********
2026-04-13 01:09:18.025290 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:09:18.025298 | orchestrator |
2026-04-13 01:09:18.025305 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-04-13 01:09:18.025334 | orchestrator | Monday 13 April 2026 01:06:30 +0000 (0:00:00.261) 0:00:32.748 **********
2026-04-13 01:09:18.025341 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:09:18.025360 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:09:18.025368 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:09:18.025376 | orchestrator |
2026-04-13 01:09:18.025383 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-13 01:09:18.025391 | orchestrator | Monday 13 April 2026 01:06:31 +0000 (0:00:00.604) 0:00:33.352 **********
2026-04-13 01:09:18.025399 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 01:09:18.025406 | orchestrator |
2026-04-13 01:09:18.025414 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-04-13 01:09:18.025422 | orchestrator | Monday 13 April 2026 01:06:31 +0000 (0:00:00.775) 0:00:34.128 **********
2026-04-13 01:09:18.025430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.025445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.025453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.025464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-13 01:09:18.025471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-13 01:09:18.025478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-13 01:09:18.025489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.025500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.025507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.025523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.025530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.025537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.025544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.025557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.025564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.025571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2026-04-13 01:09:18.025582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.025589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.025596 | orchestrator | 2026-04-13 01:09:18.025602 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-13 01:09:18.025609 | orchestrator | Monday 13 April 2026 01:06:38 +0000 (0:00:06.834) 0:00:40.963 ********** 2026-04-13 01:09:18.025627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-13 01:09:18.025636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 01:09:18.025648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.025665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-13 01:09:18.025678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 01:09:18.025691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-04-13 01:09:18.025708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.025721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.026100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.026127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.026134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.026141 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:09:18.026148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.026155 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:09:18.026162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-13 01:09:18.026169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 01:09:18.026185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.026196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.026203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.026210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2026-04-13 01:09:18.026217 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:09:18.026224 | orchestrator | 2026-04-13 01:09:18.026230 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-13 01:09:18.026237 | orchestrator | Monday 13 April 2026 01:06:39 +0000 (0:00:00.901) 0:00:41.865 ********** 2026-04-13 01:09:18.026244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-13 01:09:18.026251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 01:09:18.026263 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.026274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.026281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.026288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026295 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:09:18.026302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.026309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-13 01:09:18.026322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026354 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:09:18.026360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.026367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-13 01:09:18.026374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026412 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:09:18.026418 | orchestrator |
2026-04-13 01:09:18.026425 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-04-13 01:09:18.026431 | orchestrator | Monday 13 April 2026 01:06:41 +0000 (0:00:01.597) 0:00:43.463 **********
2026-04-13 01:09:18.026438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.026445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.026465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.026473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-13 01:09:18.026480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-13 01:09:18.026487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-13 01:09:18.026494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026598 | orchestrator |
2026-04-13 01:09:18.026617 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-04-13 01:09:18.026623 | orchestrator | Monday 13 April 2026 01:06:48 +0000 (0:00:07.095) 0:00:50.558 **********
2026-04-13 01:09:18.026630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.026638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.026648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.026662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-13 01:09:18.026670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-13 01:09:18.026677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-13 01:09:18.026684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026800 | orchestrator |
2026-04-13 01:09:18.026808 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-04-13 01:09:18.026815 | orchestrator | Monday 13 April 2026 01:07:07 +0000 (0:00:18.764) 0:01:09.322 **********
2026-04-13 01:09:18.026823 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-13 01:09:18.026830 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-13 01:09:18.026838 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-13 01:09:18.026846 | orchestrator |
2026-04-13 01:09:18.026853 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-04-13 01:09:18.026861 | orchestrator | Monday 13 April 2026 01:07:12 +0000 (0:00:05.838) 0:01:15.161 **********
2026-04-13 01:09:18.026868 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-13 01:09:18.026876 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-13 01:09:18.026884 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-13 01:09:18.026891 | orchestrator |
2026-04-13 01:09:18.026916 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-04-13 01:09:18.026924 | orchestrator | Monday 13 April 2026 01:07:16 +0000 (0:00:03.214) 0:01:18.376 **********
2026-04-13 01:09:18.026936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.026944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.026961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-13 01:09:18.026970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-13 01:09:18.026978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.026997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-13 01:09:18.027006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-13 01:09:18.027014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image':
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027135 | orchestrator | 2026-04-13 01:09:18.027141 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-04-13 01:09:18.027152 | orchestrator | Monday 13 April 2026 01:07:19 +0000 (0:00:03.720) 0:01:22.096 ********** 2026-04-13 01:09:18.027159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-13 01:09:18.027167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-13 01:09:18.027174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-13 01:09:18.027187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027480 | orchestrator | 2026-04-13 01:09:18.027486 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-13 01:09:18.027493 | orchestrator | Monday 13 April 2026 01:07:23 +0000 (0:00:03.831) 0:01:25.927 ********** 2026-04-13 01:09:18.027500 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:09:18.027507 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:09:18.027514 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:09:18.027520 | orchestrator | 2026-04-13 01:09:18.027527 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-04-13 01:09:18.027533 | orchestrator | Monday 13 April 2026 01:07:24 +0000 (0:00:00.507) 0:01:26.435 ********** 2026-04-13 01:09:18.027540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-13 01:09:18.027547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 01:09:18.027554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027594 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:09:18.027601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-13 01:09:18.027608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 01:09:18.027615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-04-13 01:09:18.027622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027656 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:09:18.027662 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-13 01:09:18.027669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 01:09:18.027677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:09:18.027716 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:09:18.027723 | orchestrator | 2026-04-13 01:09:18.027730 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-04-13 01:09:18.027736 | orchestrator | Monday 13 April 2026 01:07:25 +0000 (0:00:01.545) 0:01:27.981 ********** 2026-04-13 01:09:18.027743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-13 01:09:18.027750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-13 01:09:18.027757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-13 01:09:18.027782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:09:18.027938 | orchestrator | 2026-04-13 01:09:18.027952 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-13 01:09:18.027970 | orchestrator | Monday 13 April 2026 01:07:31 +0000 (0:00:05.774) 0:01:33.755 ********** 2026-04-13 01:09:18.027980 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:09:18.027990 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:09:18.028000 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:09:18.028010 | orchestrator | 2026-04-13 01:09:18.028020 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-04-13 01:09:18.028032 | orchestrator | Monday 13 April 2026 01:07:32 +0000 (0:00:00.868) 0:01:34.624 ********** 2026-04-13 01:09:18.028043 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-04-13 01:09:18.028052 | orchestrator | 2026-04-13 01:09:18.028063 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-04-13 01:09:18.028074 | orchestrator | Monday 13 April 2026 01:07:34 +0000 (0:00:02.019) 0:01:36.643 ********** 2026-04-13 01:09:18.028085 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-13 
01:09:18.028096 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-04-13 01:09:18.028107 | orchestrator | 2026-04-13 01:09:18.028118 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-04-13 01:09:18.028127 | orchestrator | Monday 13 April 2026 01:07:36 +0000 (0:00:02.318) 0:01:38.961 ********** 2026-04-13 01:09:18.028135 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:09:18.028142 | orchestrator | 2026-04-13 01:09:18.028149 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-13 01:09:18.028156 | orchestrator | Monday 13 April 2026 01:07:53 +0000 (0:00:16.661) 0:01:55.623 ********** 2026-04-13 01:09:18.028163 | orchestrator | 2026-04-13 01:09:18.028171 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-13 01:09:18.028178 | orchestrator | Monday 13 April 2026 01:07:53 +0000 (0:00:00.079) 0:01:55.702 ********** 2026-04-13 01:09:18.028185 | orchestrator | 2026-04-13 01:09:18.028192 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-13 01:09:18.028199 | orchestrator | Monday 13 April 2026 01:07:53 +0000 (0:00:00.081) 0:01:55.783 ********** 2026-04-13 01:09:18.028206 | orchestrator | 2026-04-13 01:09:18.028213 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-04-13 01:09:18.028220 | orchestrator | Monday 13 April 2026 01:07:53 +0000 (0:00:00.106) 0:01:55.889 ********** 2026-04-13 01:09:18.028228 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:09:18.028234 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:09:18.028241 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:09:18.028248 | orchestrator | 2026-04-13 01:09:18.028255 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-13 
01:09:18.028262 | orchestrator | Monday 13 April 2026 01:08:06 +0000 (0:00:13.137) 0:02:09.027 ********** 2026-04-13 01:09:18.028270 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:09:18.028277 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:09:18.028290 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:09:18.028297 | orchestrator | 2026-04-13 01:09:18.028304 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-13 01:09:18.028311 | orchestrator | Monday 13 April 2026 01:08:19 +0000 (0:00:12.480) 0:02:21.508 ********** 2026-04-13 01:09:18.028318 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:09:18.028326 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:09:18.028333 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:09:18.028340 | orchestrator | 2026-04-13 01:09:18.028347 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-13 01:09:18.028354 | orchestrator | Monday 13 April 2026 01:08:28 +0000 (0:00:09.248) 0:02:30.757 ********** 2026-04-13 01:09:18.028362 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:09:18.028369 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:09:18.028376 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:09:18.028383 | orchestrator | 2026-04-13 01:09:18.028390 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-13 01:09:18.028397 | orchestrator | Monday 13 April 2026 01:08:42 +0000 (0:00:13.956) 0:02:44.713 ********** 2026-04-13 01:09:18.028404 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:09:18.028412 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:09:18.028419 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:09:18.028426 | orchestrator | 2026-04-13 01:09:18.028433 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-13 01:09:18.028440 
| orchestrator | Monday 13 April 2026 01:08:53 +0000 (0:00:11.429) 0:02:56.143 ********** 2026-04-13 01:09:18.028446 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:09:18.028453 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:09:18.028460 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:09:18.028467 | orchestrator | 2026-04-13 01:09:18.028474 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-13 01:09:18.028481 | orchestrator | Monday 13 April 2026 01:09:07 +0000 (0:00:13.521) 0:03:09.664 ********** 2026-04-13 01:09:18.028488 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:09:18.028495 | orchestrator | 2026-04-13 01:09:18.028502 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:09:18.028510 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 01:09:18.028518 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-13 01:09:18.028525 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-13 01:09:18.028532 | orchestrator | 2026-04-13 01:09:18.028539 | orchestrator | 2026-04-13 01:09:18.028555 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:09:18.028563 | orchestrator | Monday 13 April 2026 01:09:14 +0000 (0:00:07.287) 0:03:16.951 ********** 2026-04-13 01:09:18.028570 | orchestrator | =============================================================================== 2026-04-13 01:09:18.028578 | orchestrator | designate : Copying over designate.conf -------------------------------- 18.76s 2026-04-13 01:09:18.028585 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.66s 2026-04-13 01:09:18.028592 | orchestrator | designate : Restart 
designate-producer container ----------------------- 13.96s 2026-04-13 01:09:18.028599 | orchestrator | designate : Restart designate-worker container ------------------------- 13.52s 2026-04-13 01:09:18.028606 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.14s 2026-04-13 01:09:18.028613 | orchestrator | designate : Restart designate-api container ---------------------------- 12.48s 2026-04-13 01:09:18.028620 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.43s 2026-04-13 01:09:18.028627 | orchestrator | designate : Restart designate-central container ------------------------- 9.25s 2026-04-13 01:09:18.028638 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.29s 2026-04-13 01:09:18.028646 | orchestrator | designate : Copying over config.json files for services ----------------- 7.10s 2026-04-13 01:09:18.028653 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.94s 2026-04-13 01:09:18.028660 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.83s 2026-04-13 01:09:18.028668 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.84s 2026-04-13 01:09:18.028675 | orchestrator | designate : Check designate containers ---------------------------------- 5.77s 2026-04-13 01:09:18.028682 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.78s 2026-04-13 01:09:18.028689 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.00s 2026-04-13 01:09:18.028696 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.97s 2026-04-13 01:09:18.028703 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.87s 2026-04-13 01:09:18.028710 | orchestrator | designate : Copying over rndc.key 
--------------------------------------- 3.83s 2026-04-13 01:09:18.028718 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.72s 2026-04-13 01:09:18.028792 | orchestrator | 2026-04-13 01:09:18 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:09:18.028838 | orchestrator | 2026-04-13 01:09:18 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED 2026-04-13 01:09:18.029559 | orchestrator | 2026-04-13 01:09:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:21.082798 | orchestrator | 2026-04-13 01:09:21 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:09:21.087742 | orchestrator | 2026-04-13 01:09:21 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:09:21.088075 | orchestrator | 2026-04-13 01:09:21 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:09:21.089827 | orchestrator | 2026-04-13 01:09:21 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED 2026-04-13 01:09:21.090840 | orchestrator | 2026-04-13 01:09:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:24.151868 | orchestrator | 2026-04-13 01:09:24 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:09:24.152007 | orchestrator | 2026-04-13 01:09:24 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:09:24.152024 | orchestrator | 2026-04-13 01:09:24 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:09:24.153746 | orchestrator | 2026-04-13 01:09:24 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED 2026-04-13 01:09:24.153852 | orchestrator | 2026-04-13 01:09:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:27.196906 | orchestrator | 2026-04-13 01:09:27 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state 
STARTED 2026-04-13 01:09:27.199916 | orchestrator | 2026-04-13 01:09:27 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:09:27.202123 | orchestrator | 2026-04-13 01:09:27 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:09:27.204787 | orchestrator | 2026-04-13 01:09:27 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED 2026-04-13 01:09:27.204988 | orchestrator | 2026-04-13 01:09:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:30.247338 | orchestrator | 2026-04-13 01:09:30 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:09:30.249171 | orchestrator | 2026-04-13 01:09:30 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:09:30.252619 | orchestrator | 2026-04-13 01:09:30 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:09:30.254766 | orchestrator | 2026-04-13 01:09:30 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED 2026-04-13 01:09:30.254962 | orchestrator | 2026-04-13 01:09:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:33.297122 | orchestrator | 2026-04-13 01:09:33 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:09:33.297208 | orchestrator | 2026-04-13 01:09:33 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:09:33.297537 | orchestrator | 2026-04-13 01:09:33 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:09:33.298293 | orchestrator | 2026-04-13 01:09:33 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED 2026-04-13 01:09:33.298325 | orchestrator | 2026-04-13 01:09:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:36.334575 | orchestrator | 2026-04-13 01:09:36 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 
01:09:36.337174 | orchestrator | 2026-04-13 01:09:36 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:09:36.337701 | orchestrator | 2026-04-13 01:09:36 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:09:36.338652 | orchestrator | 2026-04-13 01:09:36 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED 2026-04-13 01:09:36.338684 | orchestrator | 2026-04-13 01:09:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:39.388275 | orchestrator | 2026-04-13 01:09:39 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:09:39.389039 | orchestrator | 2026-04-13 01:09:39 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:09:39.391633 | orchestrator | 2026-04-13 01:09:39 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:09:39.393195 | orchestrator | 2026-04-13 01:09:39 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state STARTED 2026-04-13 01:09:39.393229 | orchestrator | 2026-04-13 01:09:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:42.441348 | orchestrator | 2026-04-13 01:09:42 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state STARTED 2026-04-13 01:09:42.443965 | orchestrator | 2026-04-13 01:09:42 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:09:42.446570 | orchestrator | 2026-04-13 01:09:42 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:09:42.448222 | orchestrator | 2026-04-13 01:09:42 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:09:42.450126 | orchestrator | 2026-04-13 01:09:42 | INFO  | Task 0f02fec3-6bbb-42d6-bcf9-239e2494a9bc is in state SUCCESS 2026-04-13 01:09:42.450243 | orchestrator | 2026-04-13 01:09:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:42.451729 | orchestrator 
| 2026-04-13 01:09:42.451751 | orchestrator | 2026-04-13 01:09:42.451759 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 01:09:42.451765 | orchestrator | 2026-04-13 01:09:42.451772 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 01:09:42.451779 | orchestrator | Monday 13 April 2026 01:08:28 +0000 (0:00:00.672) 0:00:00.672 ********** 2026-04-13 01:09:42.451786 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:09:42.451793 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:09:42.451818 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:09:42.451824 | orchestrator | 2026-04-13 01:09:42.451831 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 01:09:42.451837 | orchestrator | Monday 13 April 2026 01:08:29 +0000 (0:00:00.812) 0:00:01.485 ********** 2026-04-13 01:09:42.451844 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-13 01:09:42.451851 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-13 01:09:42.451857 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-13 01:09:42.451863 | orchestrator | 2026-04-13 01:09:42.451870 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-13 01:09:42.451898 | orchestrator | 2026-04-13 01:09:42.451904 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-13 01:09:42.451910 | orchestrator | Monday 13 April 2026 01:08:30 +0000 (0:00:00.887) 0:00:02.372 ********** 2026-04-13 01:09:42.451916 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:09:42.451923 | orchestrator | 2026-04-13 01:09:42.451928 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 
2026-04-13 01:09:42.451934 | orchestrator | Monday 13 April 2026 01:08:32 +0000 (0:00:01.999) 0:00:04.372 ********** 2026-04-13 01:09:42.451939 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-04-13 01:09:42.451945 | orchestrator | 2026-04-13 01:09:42.451961 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-04-13 01:09:42.451967 | orchestrator | Monday 13 April 2026 01:08:36 +0000 (0:00:03.877) 0:00:08.250 ********** 2026-04-13 01:09:42.451972 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-13 01:09:42.451978 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-13 01:09:42.451983 | orchestrator | 2026-04-13 01:09:42.451988 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-13 01:09:42.451994 | orchestrator | Monday 13 April 2026 01:08:42 +0000 (0:00:06.049) 0:00:14.299 ********** 2026-04-13 01:09:42.451999 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-13 01:09:42.452005 | orchestrator | 2026-04-13 01:09:42.452010 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-13 01:09:42.452015 | orchestrator | Monday 13 April 2026 01:08:45 +0000 (0:00:03.100) 0:00:17.400 ********** 2026-04-13 01:09:42.452021 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-04-13 01:09:42.452026 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-13 01:09:42.452031 | orchestrator | 2026-04-13 01:09:42.452037 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-13 01:09:42.452042 | orchestrator | Monday 13 April 2026 01:08:49 +0000 (0:00:04.095) 0:00:21.495 ********** 2026-04-13 01:09:42.452047 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2026-04-13 01:09:42.452053 | orchestrator | 2026-04-13 01:09:42.452058 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-04-13 01:09:42.452063 | orchestrator | Monday 13 April 2026 01:08:52 +0000 (0:00:03.391) 0:00:24.888 ********** 2026-04-13 01:09:42.452069 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-13 01:09:42.452074 | orchestrator | 2026-04-13 01:09:42.452079 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-13 01:09:42.452085 | orchestrator | Monday 13 April 2026 01:08:57 +0000 (0:00:04.111) 0:00:28.999 ********** 2026-04-13 01:09:42.452090 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:09:42.452095 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:09:42.452101 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:09:42.452106 | orchestrator | 2026-04-13 01:09:42.452111 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-13 01:09:42.452122 | orchestrator | Monday 13 April 2026 01:08:57 +0000 (0:00:00.303) 0:00:29.303 ********** 2026-04-13 01:09:42.452130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 01:09:42.452149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 01:09:42.452158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
2026-04-13 01:09:42.452164 | orchestrator | 2026-04-13 01:09:42.452169 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-13 01:09:42.452175 | orchestrator | Monday 13 April 2026 01:09:00 +0000 (0:00:02.984) 0:00:32.288 ********** 2026-04-13 01:09:42.452180 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:09:42.452185 | orchestrator | 2026-04-13 01:09:42.452191 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-13 01:09:42.452196 | orchestrator | Monday 13 April 2026 01:09:00 +0000 (0:00:00.179) 0:00:32.467 ********** 2026-04-13 01:09:42.452201 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:09:42.452207 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:09:42.452212 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:09:42.452217 | orchestrator | 2026-04-13 01:09:42.452223 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-13 01:09:42.452228 | orchestrator | Monday 13 April 2026 01:09:00 +0000 (0:00:00.272) 0:00:32.739 ********** 2026-04-13 01:09:42.452234 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:09:42.452239 | orchestrator | 2026-04-13 01:09:42.452245 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-13 01:09:42.452253 | orchestrator | Monday 13 April 2026 01:09:01 +0000 (0:00:00.776) 0:00:33.516 ********** 2026-04-13 01:09:42.452258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 01:09:42.452270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 01:09:42.452276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 01:09:42.452282 | orchestrator | 2026-04-13 01:09:42.452287 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-13 01:09:42.452296 | orchestrator | Monday 13 April 2026 01:09:03 +0000 (0:00:01.484) 0:00:35.000 ********** 2026-04-13 01:09:42.452301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-13 01:09:42.452311 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:09:42.452316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-13 01:09:42.452322 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:09:42.452332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-13 01:09:42.452338 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:09:42.452344 | orchestrator | 2026-04-13 01:09:42.452349 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-13 01:09:42.452355 | orchestrator | Monday 13 April 2026 01:09:03 +0000 
(0:00:00.508) 0:00:35.509 ********** 2026-04-13 01:09:42.452360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-13 01:09:42.452366 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:09:42.452374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-13 
01:09:42.452383 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:09:42.452389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-13 01:09:42.452395 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:09:42.452400 | orchestrator | 2026-04-13 01:09:42.452405 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-13 01:09:42.452411 | orchestrator | Monday 13 April 2026 01:09:04 +0000 (0:00:00.696) 0:00:36.206 ********** 2026-04-13 01:09:42.452419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 01:09:42.452425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 01:09:42.452434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 01:09:42.452443 | orchestrator | 2026-04-13 01:09:42.452449 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-13 01:09:42.452454 | orchestrator | Monday 13 April 2026 01:09:05 +0000 (0:00:01.683) 0:00:37.889 ********** 2026-04-13 01:09:42.452460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 01:09:42.452465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 01:09:42.452476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 01:09:42.452482 | orchestrator | 2026-04-13 01:09:42.452487 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-13 01:09:42.452492 | orchestrator | Monday 13 April 2026 01:09:08 +0000 (0:00:02.333) 0:00:40.223 ********** 2026-04-13 01:09:42.452498 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-13 01:09:42.452503 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-13 01:09:42.452509 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-13 01:09:42.452514 | 
orchestrator | 2026-04-13 01:09:42.452519 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-13 01:09:42.452525 | orchestrator | Monday 13 April 2026 01:09:10 +0000 (0:00:02.016) 0:00:42.240 ********** 2026-04-13 01:09:42.452534 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:09:42.452542 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:09:42.452548 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:09:42.452553 | orchestrator | 2026-04-13 01:09:42.452558 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-13 01:09:42.452564 | orchestrator | Monday 13 April 2026 01:09:12 +0000 (0:00:02.260) 0:00:44.500 ********** 2026-04-13 01:09:42.452569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-13 01:09:42.452575 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:09:42.452580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-13 01:09:42.452587 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:09:42.452596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-13 01:09:42.452602 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:09:42.452608 | orchestrator | 2026-04-13 01:09:42.452613 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-04-13 01:09:42.452618 | orchestrator | Monday 13 
April 2026 01:09:13 +0000 (0:00:01.449) 0:00:45.950 ********** 2026-04-13 01:09:42.452624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 01:09:42.452641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 01:09:42.452647 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-13 01:09:42.452653 | orchestrator | 2026-04-13 01:09:42.452658 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-13 01:09:42.452664 | orchestrator | Monday 13 April 2026 01:09:15 +0000 (0:00:01.624) 0:00:47.574 ********** 2026-04-13 01:09:42.452669 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:09:42.452674 | orchestrator | 2026-04-13 01:09:42.452679 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-13 01:09:42.452685 | orchestrator | Monday 13 April 2026 01:09:17 +0000 (0:00:01.796) 0:00:49.371 ********** 2026-04-13 01:09:42.452690 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:09:42.452695 | orchestrator | 2026-04-13 01:09:42.452701 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-13 01:09:42.452706 | orchestrator | Monday 13 April 2026 01:09:19 +0000 (0:00:01.844) 0:00:51.215 ********** 2026-04-13 01:09:42.452711 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:09:42.452716 | orchestrator | 2026-04-13 01:09:42.452722 | 
orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-13 01:09:42.452727 | orchestrator | Monday 13 April 2026 01:09:32 +0000 (0:00:13.494) 0:01:04.710 ********** 2026-04-13 01:09:42.452732 | orchestrator | 2026-04-13 01:09:42.452737 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-13 01:09:42.452743 | orchestrator | Monday 13 April 2026 01:09:32 +0000 (0:00:00.061) 0:01:04.772 ********** 2026-04-13 01:09:42.452748 | orchestrator | 2026-04-13 01:09:42.452756 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-13 01:09:42.452762 | orchestrator | Monday 13 April 2026 01:09:32 +0000 (0:00:00.069) 0:01:04.841 ********** 2026-04-13 01:09:42.452772 | orchestrator | 2026-04-13 01:09:42.452777 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-13 01:09:42.452783 | orchestrator | Monday 13 April 2026 01:09:32 +0000 (0:00:00.093) 0:01:04.935 ********** 2026-04-13 01:09:42.452788 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:09:42.452793 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:09:42.452799 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:09:42.452804 | orchestrator | 2026-04-13 01:09:42.452809 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:09:42.452815 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-13 01:09:42.452822 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-13 01:09:42.452827 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-13 01:09:42.452832 | orchestrator | 2026-04-13 01:09:42.452838 | orchestrator | 2026-04-13 01:09:42.452843 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-13 01:09:42.452849 | orchestrator | Monday 13 April 2026 01:09:39 +0000 (0:00:06.271) 0:01:11.206 ********** 2026-04-13 01:09:42.452854 | orchestrator | =============================================================================== 2026-04-13 01:09:42.452859 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.49s 2026-04-13 01:09:42.452867 | orchestrator | placement : Restart placement-api container ----------------------------- 6.27s 2026-04-13 01:09:42.452873 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.05s 2026-04-13 01:09:42.452897 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.11s 2026-04-13 01:09:42.452902 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.10s 2026-04-13 01:09:42.452907 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.88s 2026-04-13 01:09:42.452913 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.39s 2026-04-13 01:09:42.452918 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.10s 2026-04-13 01:09:42.452923 | orchestrator | placement : Ensuring config directories exist --------------------------- 2.98s 2026-04-13 01:09:42.452928 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.33s 2026-04-13 01:09:42.452934 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.26s 2026-04-13 01:09:42.452939 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.02s 2026-04-13 01:09:42.452944 | orchestrator | placement : include_tasks ----------------------------------------------- 2.00s 2026-04-13 01:09:42.452950 | orchestrator | placement : Creating placement 
databases user and setting permissions --- 1.84s 2026-04-13 01:09:42.452955 | orchestrator | placement : Creating placement databases -------------------------------- 1.80s 2026-04-13 01:09:42.452960 | orchestrator | placement : Copying over config.json files for services ----------------- 1.68s 2026-04-13 01:09:42.452965 | orchestrator | placement : Check placement containers ---------------------------------- 1.62s 2026-04-13 01:09:42.452971 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.48s 2026-04-13 01:09:42.452976 | orchestrator | placement : Copying over existing policy file --------------------------- 1.45s 2026-04-13 01:09:42.452981 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s 2026-04-13 01:09:45.501701 | orchestrator | 2026-04-13 01:09:45 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state STARTED 2026-04-13 01:09:45.502708 | orchestrator | 2026-04-13 01:09:45 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:09:45.504524 | orchestrator | 2026-04-13 01:09:45 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:09:45.505948 | orchestrator | 2026-04-13 01:09:45 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 2026-04-13 01:09:45.506176 | orchestrator | 2026-04-13 01:09:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:48.548429 | orchestrator | 2026-04-13 01:09:48 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state STARTED 2026-04-13 01:09:48.548785 | orchestrator | 2026-04-13 01:09:48 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:09:48.549686 | orchestrator | 2026-04-13 01:09:48 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:09:48.551080 | orchestrator | 2026-04-13 01:09:48 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state STARTED 
2026-04-13 01:09:48.551164 | orchestrator | 2026-04-13 01:09:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:22.144548 | orchestrator | 2026-04-13 01:10:22 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state STARTED 2026-04-13 01:10:22.146135 | orchestrator | 2026-04-13 01:10:22 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:10:22.147037 | orchestrator | 2026-04-13 01:10:22 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:10:22.150572 | orchestrator | 2026-04-13 01:10:22 | INFO  | Task 93e320f2-83d8-43c6-836f-48572175cc74 is in state SUCCESS 2026-04-13 01:10:22.150714 | orchestrator | 2026-04-13 01:10:22.152305 | orchestrator | 2026-04-13 01:10:22.152342 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 01:10:22.152354 | orchestrator | 2026-04-13 01:10:22.152365 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 01:10:22.152376 | orchestrator | Monday 13 April 2026 01:05:15 +0000 (0:00:00.862) 0:00:00.862 ********** 2026-04-13 01:10:22.152386 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:10:22.152398 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:10:22.152408 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:10:22.152418 | orchestrator | ok: [testbed-node-3]
2026-04-13 01:10:22.152428 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:10:22.152438 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:10:22.152448 | orchestrator | 2026-04-13 01:10:22.152522 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 01:10:22.152535 | orchestrator | Monday 13 April 2026 01:05:16 +0000 (0:00:00.999) 0:00:01.861 ********** 2026-04-13 01:10:22.152546 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-13 01:10:22.152556 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-13 01:10:22.152567 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-13 01:10:22.152577 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-13 01:10:22.152587 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-13 01:10:22.152597 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-13 01:10:22.152607 | orchestrator | 2026-04-13 01:10:22.152617 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-13 01:10:22.152628 | orchestrator | 2026-04-13 01:10:22.152638 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-13 01:10:22.152648 | orchestrator | Monday 13 April 2026 01:05:17 +0000 (0:00:00.806) 0:00:02.667 ********** 2026-04-13 01:10:22.152659 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 01:10:22.152680 | orchestrator | 2026-04-13 01:10:22.152690 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-13 01:10:22.152700 | orchestrator | Monday 13 April 2026 01:05:18 +0000 (0:00:01.069) 0:00:03.737 ********** 2026-04-13 01:10:22.152711 | orchestrator | ok: [testbed-node-0] 2026-04-13 
01:10:22.152721 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:10:22.152731 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:10:22.152741 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:10:22.152751 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:10:22.152761 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:10:22.152771 | orchestrator | 2026-04-13 01:10:22.152781 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-13 01:10:22.152791 | orchestrator | Monday 13 April 2026 01:05:19 +0000 (0:00:01.537) 0:00:05.275 ********** 2026-04-13 01:10:22.152801 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:10:22.152811 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:10:22.152821 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:10:22.152840 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:10:22.152882 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:10:22.152892 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:10:22.152930 | orchestrator | 2026-04-13 01:10:22.152941 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-13 01:10:22.152951 | orchestrator | Monday 13 April 2026 01:05:21 +0000 (0:00:01.494) 0:00:06.769 ********** 2026-04-13 01:10:22.152961 | orchestrator | ok: [testbed-node-0] => { 2026-04-13 01:10:22.152972 | orchestrator |  "changed": false, 2026-04-13 01:10:22.152982 | orchestrator |  "msg": "All assertions passed" 2026-04-13 01:10:22.152993 | orchestrator | } 2026-04-13 01:10:22.153003 | orchestrator | ok: [testbed-node-1] => { 2026-04-13 01:10:22.153013 | orchestrator |  "changed": false, 2026-04-13 01:10:22.153023 | orchestrator |  "msg": "All assertions passed" 2026-04-13 01:10:22.153034 | orchestrator | } 2026-04-13 01:10:22.153044 | orchestrator | ok: [testbed-node-2] => { 2026-04-13 01:10:22.153054 | orchestrator |  "changed": false, 2026-04-13 01:10:22.153064 | orchestrator |  "msg": "All assertions passed" 
2026-04-13 01:10:22.153074 | orchestrator | } 2026-04-13 01:10:22.153084 | orchestrator | ok: [testbed-node-3] => { 2026-04-13 01:10:22.153094 | orchestrator |  "changed": false, 2026-04-13 01:10:22.153104 | orchestrator |  "msg": "All assertions passed" 2026-04-13 01:10:22.153114 | orchestrator | } 2026-04-13 01:10:22.153124 | orchestrator | ok: [testbed-node-4] => { 2026-04-13 01:10:22.153134 | orchestrator |  "changed": false, 2026-04-13 01:10:22.153145 | orchestrator |  "msg": "All assertions passed" 2026-04-13 01:10:22.153155 | orchestrator | } 2026-04-13 01:10:22.153165 | orchestrator | ok: [testbed-node-5] => { 2026-04-13 01:10:22.153175 | orchestrator |  "changed": false, 2026-04-13 01:10:22.153185 | orchestrator |  "msg": "All assertions passed" 2026-04-13 01:10:22.153196 | orchestrator | } 2026-04-13 01:10:22.153206 | orchestrator | 2026-04-13 01:10:22.153216 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-13 01:10:22.153226 | orchestrator | Monday 13 April 2026 01:05:21 +0000 (0:00:00.620) 0:00:07.390 ********** 2026-04-13 01:10:22.153236 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.153246 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.153256 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.153266 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.153276 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.153286 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.153296 | orchestrator | 2026-04-13 01:10:22.153320 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-04-13 01:10:22.153330 | orchestrator | Monday 13 April 2026 01:05:22 +0000 (0:00:00.793) 0:00:08.184 ********** 2026-04-13 01:10:22.153341 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-04-13 01:10:22.153351 | orchestrator | 2026-04-13 01:10:22.153361 | orchestrator | TASK 
[service-ks-register : neutron | Creating endpoints] ********************** 2026-04-13 01:10:22.153371 | orchestrator | Monday 13 April 2026 01:05:26 +0000 (0:00:03.392) 0:00:11.576 ********** 2026-04-13 01:10:22.153381 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-04-13 01:10:22.153393 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-04-13 01:10:22.153403 | orchestrator | 2026-04-13 01:10:22.153425 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-04-13 01:10:22.153436 | orchestrator | Monday 13 April 2026 01:05:32 +0000 (0:00:06.506) 0:00:18.082 ********** 2026-04-13 01:10:22.153447 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-13 01:10:22.153457 | orchestrator | 2026-04-13 01:10:22.153467 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-04-13 01:10:22.153477 | orchestrator | Monday 13 April 2026 01:05:36 +0000 (0:00:03.488) 0:00:21.571 ********** 2026-04-13 01:10:22.153487 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-04-13 01:10:22.153497 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-13 01:10:22.153507 | orchestrator | 2026-04-13 01:10:22.153517 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-04-13 01:10:22.153532 | orchestrator | Monday 13 April 2026 01:05:39 +0000 (0:00:03.865) 0:00:25.436 ********** 2026-04-13 01:10:22.153542 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-13 01:10:22.153552 | orchestrator | 2026-04-13 01:10:22.153563 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-04-13 01:10:22.153573 | orchestrator | Monday 13 April 2026 01:05:43 +0000 (0:00:03.253) 0:00:28.690 ********** 2026-04-13 
01:10:22.153583 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-04-13 01:10:22.153593 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-04-13 01:10:22.153603 | orchestrator | 2026-04-13 01:10:22.153613 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-13 01:10:22.153623 | orchestrator | Monday 13 April 2026 01:05:50 +0000 (0:00:07.794) 0:00:36.485 ********** 2026-04-13 01:10:22.153633 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.153644 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.153654 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.153664 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.153674 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.153684 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.153694 | orchestrator | 2026-04-13 01:10:22.153704 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-04-13 01:10:22.153715 | orchestrator | Monday 13 April 2026 01:05:51 +0000 (0:00:00.602) 0:00:37.087 ********** 2026-04-13 01:10:22.153725 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.153735 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.153745 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.153755 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.153765 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.153775 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.153785 | orchestrator | 2026-04-13 01:10:22.153795 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-13 01:10:22.153805 | orchestrator | Monday 13 April 2026 01:05:54 +0000 (0:00:02.695) 0:00:39.783 ********** 2026-04-13 01:10:22.153815 | orchestrator | ok: [testbed-node-0] 2026-04-13 
01:10:22.153825 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:10:22.153835 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:10:22.153860 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:10:22.153871 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:10:22.153881 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:10:22.153891 | orchestrator | 2026-04-13 01:10:22.153901 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-13 01:10:22.153911 | orchestrator | Monday 13 April 2026 01:05:55 +0000 (0:00:01.085) 0:00:40.868 ********** 2026-04-13 01:10:22.153921 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.153931 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.153941 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.153951 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.153961 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.153971 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.153981 | orchestrator | 2026-04-13 01:10:22.153991 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-13 01:10:22.154001 | orchestrator | Monday 13 April 2026 01:05:57 +0000 (0:00:02.512) 0:00:43.381 ********** 2026-04-13 01:10:22.154073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-13 01:10:22.154110 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-13 01:10:22.154123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-13 01:10:22.154134 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-13 01:10:22.154145 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-13 01:10:22.154161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-13 01:10:22.154178 | orchestrator | 2026-04-13 01:10:22.154189 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-13 01:10:22.154200 | orchestrator | Monday 13 April 2026 01:06:00 +0000 (0:00:02.906) 0:00:46.288 ********** 2026-04-13 01:10:22.154210 | orchestrator | [WARNING]: Skipped 2026-04-13 01:10:22.154221 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-13 01:10:22.154231 | orchestrator | due to this access issue: 2026-04-13 01:10:22.154241 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-13 01:10:22.154252 | orchestrator | a directory 2026-04-13 01:10:22.154262 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-13 01:10:22.154272 | orchestrator | 2026-04-13 01:10:22.154282 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-13 01:10:22.154298 | orchestrator | Monday 13 April 2026 01:06:01 +0000 (0:00:00.944) 0:00:47.233 ********** 2026-04-13 01:10:22.154309 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 01:10:22.154320 | orchestrator | 2026-04-13 01:10:22.154331 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-13 01:10:22.154341 | orchestrator | Monday 13 April 2026 01:06:02 +0000 
(0:00:01.274) 0:00:48.507 ********** 2026-04-13 01:10:22.154352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-13 01:10:22.154363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-13 01:10:22.154374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-13 01:10:22.154395 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-13 01:10:22.154414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.154426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.154454 | orchestrator |
2026-04-13 01:10:22.154465 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-04-13 01:10:22.154475 | orchestrator | Monday 13 April 2026 01:06:07 +0000 (0:00:04.871) 0:00:53.379 **********
2026-04-13 01:10:22.154486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.154498 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:10:22.154516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.154527 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:10:22.154553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.154565 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:10:22.154593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.154605 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:10:22.154616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.154626 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:10:22.154637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.154656 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:10:22.154666 | orchestrator |
2026-04-13 01:10:22.154676 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2026-04-13 01:10:22.154687 | orchestrator | Monday 13 April 2026 01:06:10 +0000 (0:00:02.895) 0:00:56.274 **********
2026-04-13 01:10:22.154697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.154713 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:10:22.154731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.154742 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:10:22.154752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.154763 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:10:22.154773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.154790 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:10:22.154801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.154811 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:10:22.154832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.154896 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:10:22.154909 | orchestrator |
2026-04-13 01:10:22.154920 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2026-04-13 01:10:22.154930 | orchestrator | Monday 13 April 2026 01:06:14 +0000 (0:00:04.162) 0:01:00.437 **********
2026-04-13 01:10:22.154940 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:10:22.154951 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:10:22.154961 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:10:22.154972 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:10:22.154982 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:10:22.154992 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:10:22.155003 | orchestrator |
2026-04-13 01:10:22.155013 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-04-13 01:10:22.155030 | orchestrator | Monday 13 April 2026 01:06:18 +0000 (0:00:03.122) 0:01:03.559 **********
2026-04-13 01:10:22.155041 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:10:22.155051 | orchestrator |
2026-04-13 01:10:22.155061 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-04-13 01:10:22.155072 | orchestrator | Monday 13 April 2026 01:06:18 +0000 (0:00:00.219) 0:01:03.778 **********
2026-04-13 01:10:22.155082 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:10:22.155092 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:10:22.155102 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:10:22.155113 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:10:22.155123 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:10:22.155133 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:10:22.155143 | orchestrator |
2026-04-13 01:10:22.155153 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2026-04-13 01:10:22.155164 | orchestrator | Monday 13 April 2026 01:06:18 +0000 (0:00:00.579) 0:01:04.358 **********
2026-04-13 01:10:22.155174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.155194 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:10:22.155205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.155216 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:10:22.155227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.155247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.155259 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:10:22.155269 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:10:22.155280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.155299 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:10:22.155309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.155320 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:10:22.155331 | orchestrator |
2026-04-13 01:10:22.155341 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2026-04-13 01:10:22.155351 | orchestrator | Monday 13 April 2026 01:06:22 +0000 (0:00:03.859) 0:01:08.218 **********
2026-04-13 01:10:22.155362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.155377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.155396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.155414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.155426 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.155437 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.155448 | orchestrator |
2026-04-13 01:10:22.155459 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2026-04-13 01:10:22.155469 | orchestrator | Monday 13 April 2026 01:06:27 +0000 (0:00:04.445) 0:01:12.663 **********
2026-04-13 01:10:22.155485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.155503 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.155521 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.155532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.155542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.155557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.155568 | orchestrator |
2026-04-13 01:10:22.155578 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2026-04-13 01:10:22.155588 | orchestrator | Monday 13 April 2026 01:06:33 +0000 (0:00:06.755) 0:01:19.419 **********
2026-04-13 01:10:22.155607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.155624 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:10:22.155635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.155646 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:10:22.155657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.155668 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:10:22.155678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-13 01:10:22.155689 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:10:22.155704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.155721 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:10:22.155738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.155749 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:10:22.155760 | orchestrator |
2026-04-13 01:10:22.155770 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-04-13 01:10:22.155781 | orchestrator | Monday 13 April 2026 01:06:36 +0000 (0:00:02.211) 0:01:21.632 **********
2026-04-13 01:10:22.155791 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:10:22.155801 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:10:22.155812 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:10:22.155822 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:10:22.155832 | orchestrator | changed: [testbed-node-2]
2026-04-13 01:10:22.155860 | orchestrator | changed: [testbed-node-1]
2026-04-13 01:10:22.155871 | orchestrator |
2026-04-13 01:10:22.155882 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-04-13 01:10:22.155892 | orchestrator | Monday 13 April 2026 01:06:39 +0000 (0:00:02.971) 0:01:24.603 **********
2026-04-13 01:10:22.155903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.155914 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:10:22.155925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-13 01:10:22.155936 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:10:22.155951
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 01:10:22.155969 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.155987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-13 01:10:22.155999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-13 01:10:22.156010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-13 01:10:22.156021 | orchestrator | 2026-04-13 01:10:22.156031 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-04-13 01:10:22.156042 | orchestrator | Monday 13 April 2026 01:06:43 +0000 (0:00:03.978) 0:01:28.582 ********** 2026-04-13 01:10:22.156053 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.156063 | 
orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.156073 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.156083 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.156094 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.156104 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.156115 | orchestrator | 2026-04-13 01:10:22.156125 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-13 01:10:22.156141 | orchestrator | Monday 13 April 2026 01:06:45 +0000 (0:00:02.276) 0:01:30.859 ********** 2026-04-13 01:10:22.156152 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.156162 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.156173 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.156183 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.156193 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.156204 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.156214 | orchestrator | 2026-04-13 01:10:22.156225 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-13 01:10:22.156235 | orchestrator | Monday 13 April 2026 01:06:47 +0000 (0:00:02.441) 0:01:33.300 ********** 2026-04-13 01:10:22.156245 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.156255 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.156266 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.156276 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.156286 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.156296 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.156307 | orchestrator | 2026-04-13 01:10:22.156324 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-13 01:10:22.156335 | orchestrator | Monday 13 April 2026 01:06:51 +0000 
(0:00:03.910) 0:01:37.211 ********** 2026-04-13 01:10:22.156345 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.156355 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.156365 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.156375 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.156386 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.156396 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.156406 | orchestrator | 2026-04-13 01:10:22.156416 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-13 01:10:22.156426 | orchestrator | Monday 13 April 2026 01:06:53 +0000 (0:00:02.303) 0:01:39.514 ********** 2026-04-13 01:10:22.156436 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.156447 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.156457 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.156467 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.156483 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.156494 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.156504 | orchestrator | 2026-04-13 01:10:22.156515 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-13 01:10:22.156525 | orchestrator | Monday 13 April 2026 01:06:56 +0000 (0:00:02.222) 0:01:41.736 ********** 2026-04-13 01:10:22.156535 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.156545 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.156555 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.156565 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.156575 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.156585 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.156596 | orchestrator | 2026-04-13 01:10:22.156606 | orchestrator | TASK [neutron : Copying 
over dnsmasq.conf] ************************************* 2026-04-13 01:10:22.156616 | orchestrator | Monday 13 April 2026 01:07:01 +0000 (0:00:04.945) 0:01:46.682 ********** 2026-04-13 01:10:22.156626 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-13 01:10:22.156637 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.156647 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-13 01:10:22.156657 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.156667 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-13 01:10:22.156677 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.156687 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-13 01:10:22.156704 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.156715 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-13 01:10:22.156725 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.156735 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-13 01:10:22.156745 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.156755 | orchestrator | 2026-04-13 01:10:22.156766 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-13 01:10:22.156776 | orchestrator | Monday 13 April 2026 01:07:03 +0000 (0:00:02.301) 0:01:48.983 ********** 2026-04-13 01:10:22.156786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-13 01:10:22.156797 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.156813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-13 01:10:22.156823 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.156840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-13 01:10:22.156902 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.156913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 01:10:22.156934 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.156945 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 01:10:22.156956 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.156966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 01:10:22.156977 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.156987 | orchestrator | 2026-04-13 01:10:22.156997 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-13 01:10:22.157008 | orchestrator | Monday 13 April 2026 01:07:05 +0000 (0:00:02.239) 0:01:51.223 ********** 2026-04-13 01:10:22.157023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-13 01:10:22.157035 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.157053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-13 01:10:22.157071 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.157082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-13 01:10:22.157093 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.157104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 01:10:22.157115 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.157125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 01:10:22.157136 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.157151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 01:10:22.157162 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.157172 | orchestrator | 2026-04-13 01:10:22.157183 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-13 01:10:22.157193 | orchestrator | Monday 13 April 2026 01:07:07 +0000 (0:00:02.095) 0:01:53.318 ********** 2026-04-13 01:10:22.157204 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.157227 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.157238 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.157248 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.157259 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.157270 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.157280 | orchestrator | 2026-04-13 01:10:22.157290 | orchestrator | TASK [neutron : Copying over 
neutron_ovn_metadata_agent.ini] ******************* 2026-04-13 01:10:22.157301 | orchestrator | Monday 13 April 2026 01:07:10 +0000 (0:00:02.920) 0:01:56.239 ********** 2026-04-13 01:10:22.157311 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.157321 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.157331 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.157342 | orchestrator | changed: [testbed-node-3] 2026-04-13 01:10:22.157352 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:10:22.157362 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:10:22.157372 | orchestrator | 2026-04-13 01:10:22.157383 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-13 01:10:22.157394 | orchestrator | Monday 13 April 2026 01:07:15 +0000 (0:00:04.562) 0:02:00.801 ********** 2026-04-13 01:10:22.157405 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.157415 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.157423 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.157432 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.157440 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.157448 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.157457 | orchestrator | 2026-04-13 01:10:22.157465 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-13 01:10:22.157474 | orchestrator | Monday 13 April 2026 01:07:17 +0000 (0:00:02.296) 0:02:03.098 ********** 2026-04-13 01:10:22.157483 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.157491 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.157500 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.157508 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.157517 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.157525 | orchestrator | skipping: 
[testbed-node-5] 2026-04-13 01:10:22.157534 | orchestrator | 2026-04-13 01:10:22.157542 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-13 01:10:22.157551 | orchestrator | Monday 13 April 2026 01:07:19 +0000 (0:00:02.151) 0:02:05.250 ********** 2026-04-13 01:10:22.157560 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.157568 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.157577 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.157586 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.157594 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.157602 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.157611 | orchestrator | 2026-04-13 01:10:22.157619 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-04-13 01:10:22.157628 | orchestrator | Monday 13 April 2026 01:07:22 +0000 (0:00:03.109) 0:02:08.359 ********** 2026-04-13 01:10:22.157636 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.157645 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.157653 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.157662 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.157670 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.157681 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.157696 | orchestrator | 2026-04-13 01:10:22.157711 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-04-13 01:10:22.157725 | orchestrator | Monday 13 April 2026 01:07:26 +0000 (0:00:03.262) 0:02:11.622 ********** 2026-04-13 01:10:22.157747 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.157763 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.157778 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.157792 | orchestrator | skipping: 
[testbed-node-4] 2026-04-13 01:10:22.157816 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.157832 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.157865 | orchestrator | 2026-04-13 01:10:22.157880 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-04-13 01:10:22.157892 | orchestrator | Monday 13 April 2026 01:07:28 +0000 (0:00:02.296) 0:02:13.918 ********** 2026-04-13 01:10:22.157905 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.157919 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.157932 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.157943 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.157952 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.157963 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.157977 | orchestrator | 2026-04-13 01:10:22.157990 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-04-13 01:10:22.158004 | orchestrator | Monday 13 April 2026 01:07:31 +0000 (0:00:02.736) 0:02:16.655 ********** 2026-04-13 01:10:22.158050 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.158069 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.158084 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.158093 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.158101 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.158109 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.158117 | orchestrator | 2026-04-13 01:10:22.158126 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-04-13 01:10:22.158140 | orchestrator | Monday 13 April 2026 01:07:33 +0000 (0:00:02.311) 0:02:18.967 ********** 2026-04-13 01:10:22.158149 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-13 01:10:22.158158 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.158166 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-13 01:10:22.158174 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.158182 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-13 01:10:22.158190 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.158198 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-13 01:10:22.158207 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.158226 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-13 01:10:22.158235 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.158243 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-13 01:10:22.158252 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.158261 | orchestrator | 2026-04-13 01:10:22.158269 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-13 01:10:22.158278 | orchestrator | Monday 13 April 2026 01:07:36 +0000 (0:00:02.756) 0:02:21.723 ********** 2026-04-13 01:10:22.158287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-13 01:10:22.158304 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.158313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-13 01:10:22.158322 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.158331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 01:10:22.158344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-13 01:10:22.158353 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.158362 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.158376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 01:10:22.158385 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.158394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 01:10:22.158410 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.158419 | orchestrator | 2026-04-13 01:10:22.158427 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-04-13 01:10:22.158436 | orchestrator | Monday 13 April 2026 01:07:39 +0000 (0:00:03.251) 0:02:24.975 ********** 2026-04-13 01:10:22.158444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-13 01:10:22.158453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-13 01:10:22.158472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-13 01:10:22.158481 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-13 01:10:22.158496 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-13 01:10:22.158505 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-13 01:10:22.158513 | orchestrator | 2026-04-13 01:10:22.158522 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-13 01:10:22.158530 | orchestrator | Monday 13 April 2026 01:07:42 +0000 (0:00:02.682) 0:02:27.658 ********** 2026-04-13 01:10:22.158539 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:10:22.158547 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:10:22.158555 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:10:22.158563 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:10:22.158572 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:10:22.158580 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:10:22.158588 | orchestrator | 2026-04-13 01:10:22.158596 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-04-13 01:10:22.158604 | orchestrator | Monday 13 April 2026 01:07:42 +0000 (0:00:00.689) 0:02:28.347 ********** 2026-04-13 01:10:22.158613 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:10:22.158621 | orchestrator | 2026-04-13 01:10:22.158629 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-04-13 01:10:22.158638 | orchestrator | Monday 13 April 2026 01:07:44 +0000 (0:00:01.871) 0:02:30.218 ********** 2026-04-13 01:10:22.158646 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:10:22.158654 | orchestrator | 2026-04-13 01:10:22.158662 | orchestrator | TASK [neutron : 
Running Neutron bootstrap container] *************************** 2026-04-13 01:10:22.158670 | orchestrator | Monday 13 April 2026 01:07:46 +0000 (0:00:02.205) 0:02:32.424 ********** 2026-04-13 01:10:22.158678 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:10:22.158686 | orchestrator | 2026-04-13 01:10:22.158701 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-13 01:10:22.158710 | orchestrator | Monday 13 April 2026 01:08:30 +0000 (0:00:43.391) 0:03:15.816 ********** 2026-04-13 01:10:22.158718 | orchestrator | 2026-04-13 01:10:22.158726 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-13 01:10:22.158734 | orchestrator | Monday 13 April 2026 01:08:30 +0000 (0:00:00.293) 0:03:16.110 ********** 2026-04-13 01:10:22.158742 | orchestrator | 2026-04-13 01:10:22.158750 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-13 01:10:22.158759 | orchestrator | Monday 13 April 2026 01:08:30 +0000 (0:00:00.304) 0:03:16.414 ********** 2026-04-13 01:10:22.158767 | orchestrator | 2026-04-13 01:10:22.158776 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-13 01:10:22.158790 | orchestrator | Monday 13 April 2026 01:08:31 +0000 (0:00:00.303) 0:03:16.718 ********** 2026-04-13 01:10:22.158798 | orchestrator | 2026-04-13 01:10:22.158912 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-13 01:10:22.158932 | orchestrator | Monday 13 April 2026 01:08:31 +0000 (0:00:00.200) 0:03:16.919 ********** 2026-04-13 01:10:22.158946 | orchestrator | 2026-04-13 01:10:22.158960 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-13 01:10:22.158974 | orchestrator | Monday 13 April 2026 01:08:31 +0000 (0:00:00.212) 0:03:17.131 ********** 2026-04-13 01:10:22.158987 | 
orchestrator | 2026-04-13 01:10:22.159001 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-04-13 01:10:22.159014 | orchestrator | Monday 13 April 2026 01:08:31 +0000 (0:00:00.178) 0:03:17.310 ********** 2026-04-13 01:10:22.159026 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:10:22.159038 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:10:22.159051 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:10:22.159065 | orchestrator | 2026-04-13 01:10:22.159078 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-04-13 01:10:22.159093 | orchestrator | Monday 13 April 2026 01:09:07 +0000 (0:00:35.720) 0:03:53.031 ********** 2026-04-13 01:10:22.159106 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:10:22.159119 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:10:22.159132 | orchestrator | changed: [testbed-node-3] 2026-04-13 01:10:22.159144 | orchestrator | 2026-04-13 01:10:22.159156 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:10:22.159169 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-13 01:10:22.159183 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-13 01:10:22.159196 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-13 01:10:22.159209 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-13 01:10:22.159222 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-13 01:10:22.159235 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-13 01:10:22.159249 | orchestrator | 2026-04-13 
01:10:22.159262 | orchestrator | 2026-04-13 01:10:22.159276 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:10:22.159289 | orchestrator | Monday 13 April 2026 01:10:19 +0000 (0:01:12.463) 0:05:05.495 ********** 2026-04-13 01:10:22.159302 | orchestrator | =============================================================================== 2026-04-13 01:10:22.159316 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 72.46s 2026-04-13 01:10:22.159329 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.39s 2026-04-13 01:10:22.159342 | orchestrator | neutron : Restart neutron-server container ----------------------------- 35.72s 2026-04-13 01:10:22.159355 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.79s 2026-04-13 01:10:22.159368 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.76s 2026-04-13 01:10:22.159382 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.51s 2026-04-13 01:10:22.159395 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 4.95s 2026-04-13 01:10:22.159409 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.87s 2026-04-13 01:10:22.159437 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.56s 2026-04-13 01:10:22.159450 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.45s 2026-04-13 01:10:22.159463 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.16s 2026-04-13 01:10:22.159476 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.98s 2026-04-13 01:10:22.159490 | orchestrator | neutron : Copying over sriov_agent.ini 
---------------------------------- 3.91s 2026-04-13 01:10:22.159503 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.87s 2026-04-13 01:10:22.159516 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.86s 2026-04-13 01:10:22.159530 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.49s 2026-04-13 01:10:22.159551 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.39s 2026-04-13 01:10:22.159565 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.26s 2026-04-13 01:10:22.159579 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.25s 2026-04-13 01:10:22.159593 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.25s 2026-04-13 01:10:22.159607 | orchestrator | 2026-04-13 01:10:22 | INFO  | Task 183a4eed-78d1-4699-8387-f373e925cb3a is in state STARTED 2026-04-13 01:10:22.159622 | orchestrator | 2026-04-13 01:10:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:25.186588 | orchestrator | 2026-04-13 01:10:25 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state STARTED 2026-04-13 01:10:25.187197 | orchestrator | 2026-04-13 01:10:25 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:10:25.188204 | orchestrator | 2026-04-13 01:10:25 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:10:25.188926 | orchestrator | 2026-04-13 01:10:25 | INFO  | Task 183a4eed-78d1-4699-8387-f373e925cb3a is in state STARTED 2026-04-13 01:10:25.188956 | orchestrator | 2026-04-13 01:10:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:28.238455 | orchestrator | 2026-04-13 01:10:28 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state STARTED 2026-04-13 01:10:28.239813 | orchestrator | 
2026-04-13 01:10:28 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:10:28.241007 | orchestrator | 2026-04-13 01:10:28 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:10:28.242384 | orchestrator | 2026-04-13 01:10:28 | INFO  | Task 183a4eed-78d1-4699-8387-f373e925cb3a is in state STARTED 2026-04-13 01:10:28.242426 | orchestrator | 2026-04-13 01:10:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:31.303379 | orchestrator | 2026-04-13 01:10:31 | INFO  | Task fd1b0ae9-6ae9-4a68-889e-92849f499e01 is in state STARTED 2026-04-13 01:10:31.304598 | orchestrator | 2026-04-13 01:10:31 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state STARTED 2026-04-13 01:10:31.305632 | orchestrator | 2026-04-13 01:10:31 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:10:31.306888 | orchestrator | 2026-04-13 01:10:31 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:10:31.307776 | orchestrator | 2026-04-13 01:10:31 | INFO  | Task 183a4eed-78d1-4699-8387-f373e925cb3a is in state SUCCESS 2026-04-13 01:10:31.308384 | orchestrator | 2026-04-13 01:10:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:34.358643 | orchestrator | 2026-04-13 01:10:34 | INFO  | Task fd1b0ae9-6ae9-4a68-889e-92849f499e01 is in state STARTED 2026-04-13 01:10:34.359684 | orchestrator | 2026-04-13 01:10:34 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state STARTED 2026-04-13 01:10:34.361599 | orchestrator | 2026-04-13 01:10:34 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:10:34.363403 | orchestrator | 2026-04-13 01:10:34 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:10:34.363483 | orchestrator | 2026-04-13 01:10:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:37.412162 | orchestrator | 2026-04-13 01:10:37 | INFO  | 
Task fd1b0ae9-6ae9-4a68-889e-92849f499e01 is in state STARTED 2026-04-13 01:10:37.413451 | orchestrator | 2026-04-13 01:10:37 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state STARTED 2026-04-13 01:10:37.416825 | orchestrator | 2026-04-13 01:10:37 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:10:37.418975 | orchestrator | 2026-04-13 01:10:37 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:10:37.419064 | orchestrator | 2026-04-13 01:10:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:40.460383 | orchestrator | 2026-04-13 01:10:40 | INFO  | Task fd1b0ae9-6ae9-4a68-889e-92849f499e01 is in state STARTED 2026-04-13 01:10:40.461243 | orchestrator | 2026-04-13 01:10:40 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state STARTED 2026-04-13 01:10:40.462963 | orchestrator | 2026-04-13 01:10:40 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:10:40.464073 | orchestrator | 2026-04-13 01:10:40 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:10:40.464105 | orchestrator | 2026-04-13 01:10:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:43.514268 | orchestrator | 2026-04-13 01:10:43 | INFO  | Task fd1b0ae9-6ae9-4a68-889e-92849f499e01 is in state STARTED 2026-04-13 01:10:43.519282 | orchestrator | 2026-04-13 01:10:43 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state STARTED 2026-04-13 01:10:43.524365 | orchestrator | 2026-04-13 01:10:43 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:10:43.527118 | orchestrator | 2026-04-13 01:10:43 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:10:43.527888 | orchestrator | 2026-04-13 01:10:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:46.567021 | orchestrator | 2026-04-13 01:10:46 | INFO  | Task 
fd1b0ae9-6ae9-4a68-889e-92849f499e01 is in state STARTED 2026-04-13 01:10:46.569206 | orchestrator | 2026-04-13 01:10:46 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state STARTED 2026-04-13 01:10:46.573749 | orchestrator | 2026-04-13 01:10:46 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:10:46.575746 | orchestrator | 2026-04-13 01:10:46 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:10:46.575887 | orchestrator | 2026-04-13 01:10:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:49.620979 | orchestrator | 2026-04-13 01:10:49 | INFO  | Task fd1b0ae9-6ae9-4a68-889e-92849f499e01 is in state STARTED 2026-04-13 01:10:49.621234 | orchestrator | 2026-04-13 01:10:49 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state STARTED 2026-04-13 01:10:49.622367 | orchestrator | 2026-04-13 01:10:49 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:10:49.623507 | orchestrator | 2026-04-13 01:10:49 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:10:49.623532 | orchestrator | 2026-04-13 01:10:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:52.671388 | orchestrator | 2026-04-13 01:10:52 | INFO  | Task fd1b0ae9-6ae9-4a68-889e-92849f499e01 is in state STARTED 2026-04-13 01:10:52.672510 | orchestrator | 2026-04-13 01:10:52 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state STARTED 2026-04-13 01:10:52.675108 | orchestrator | 2026-04-13 01:10:52 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state STARTED 2026-04-13 01:10:52.677417 | orchestrator | 2026-04-13 01:10:52 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state STARTED 2026-04-13 01:10:52.677459 | orchestrator | 2026-04-13 01:10:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:55.728807 | orchestrator | 2026-04-13 01:10:55 | INFO  | Task 
fd1b0ae9-6ae9-4a68-889e-92849f499e01 is in state STARTED 2026-04-13 01:10:55.731622 | orchestrator | 2026-04-13 01:10:55 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state STARTED 2026-04-13 01:12:55.846382 | orchestrator | 2026-04-13 01:12:55 | INFO  | Task c06c61f8-a39d-41e9-a426-678bc524928f is in state SUCCESS 2026-04-13 01:12:55.851430 | orchestrator | 2026-04-13 01:12:55.851521 | orchestrator | 2026-04-13 01:12:55.851537 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 01:12:55.851549 | orchestrator | 2026-04-13 01:12:55.851561 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 01:12:55.851573 | orchestrator | Monday 13 April 2026 01:10:25 +0000 (0:00:00.197) 0:00:00.197 ********** 2026-04-13 01:12:55.851585 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:55.851597 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:12:55.851609 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:12:55.851620 | orchestrator | 2026-04-13 01:12:55.851632 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 01:12:55.851671 | orchestrator | Monday 13 April 2026 01:10:25 +0000 (0:00:00.541) 0:00:00.739 ********** 2026-04-13 01:12:55.851686 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-04-13 01:12:55.851807 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-04-13 01:12:55.851824 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-04-13 01:12:55.851836 | orchestrator | 2026-04-13 01:12:55.851848 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-04-13 01:12:55.851859 | orchestrator | 2026-04-13 01:12:55.851871 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-04-13 01:12:55.851882 | orchestrator | Monday 13 April 2026 01:10:26 
+0000 (0:00:00.508) 0:00:01.248 ********** 2026-04-13 01:12:55.851894 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:12:55.851906 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:12:55.851917 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:55.851928 | orchestrator | 2026-04-13 01:12:55.851940 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:12:55.851953 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:12:55.851967 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:12:55.851997 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:12:55.852011 | orchestrator | 2026-04-13 01:12:55.852108 | orchestrator | 2026-04-13 01:12:55.852147 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:12:55.852161 | orchestrator | Monday 13 April 2026 01:10:27 +0000 (0:00:01.281) 0:00:02.529 ********** 2026-04-13 01:12:55.852175 | orchestrator | =============================================================================== 2026-04-13 01:12:55.852199 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.28s 2026-04-13 01:12:55.852251 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.54s 2026-04-13 01:12:55.852266 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2026-04-13 01:12:55.852286 | orchestrator | 2026-04-13 01:12:55.852297 | orchestrator | 2026-04-13 01:12:55.852309 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 01:12:55.852320 | orchestrator | 2026-04-13 01:12:55.852332 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-04-13 01:12:55.852343 | orchestrator | Monday 13 April 2026 01:09:18 +0000 (0:00:00.477) 0:00:00.477 ********** 2026-04-13 01:12:55.852354 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:55.852365 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:12:55.852377 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:12:55.852388 | orchestrator | 2026-04-13 01:12:55.852399 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 01:12:55.852411 | orchestrator | Monday 13 April 2026 01:09:18 +0000 (0:00:00.369) 0:00:00.847 ********** 2026-04-13 01:12:55.852422 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-13 01:12:55.852434 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-13 01:12:55.852445 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-13 01:12:55.852456 | orchestrator | 2026-04-13 01:12:55.852467 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-13 01:12:55.852479 | orchestrator | 2026-04-13 01:12:55.852490 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-13 01:12:55.852501 | orchestrator | Monday 13 April 2026 01:09:18 +0000 (0:00:00.292) 0:00:01.140 ********** 2026-04-13 01:12:55.852512 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:12:55.852524 | orchestrator | 2026-04-13 01:12:55.852535 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-04-13 01:12:55.852547 | orchestrator | Monday 13 April 2026 01:09:20 +0000 (0:00:01.109) 0:00:02.250 ********** 2026-04-13 01:12:55.852559 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-04-13 01:12:55.852570 | orchestrator | 2026-04-13 01:12:55.852581 | orchestrator | 
TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-04-13 01:12:55.852593 | orchestrator | Monday 13 April 2026 01:09:23 +0000 (0:00:03.712) 0:00:05.962 ********** 2026-04-13 01:12:55.852604 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-04-13 01:12:55.852616 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-04-13 01:12:55.852627 | orchestrator | 2026-04-13 01:12:55.852638 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-04-13 01:12:55.852649 | orchestrator | Monday 13 April 2026 01:09:29 +0000 (0:00:06.173) 0:00:12.136 ********** 2026-04-13 01:12:55.852661 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-13 01:12:55.852675 | orchestrator | 2026-04-13 01:12:55.852754 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-04-13 01:12:55.852766 | orchestrator | Monday 13 April 2026 01:09:33 +0000 (0:00:03.494) 0:00:15.630 ********** 2026-04-13 01:12:55.852795 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-04-13 01:12:55.852807 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-13 01:12:55.852819 | orchestrator | 2026-04-13 01:12:55.852830 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-04-13 01:12:55.852842 | orchestrator | Monday 13 April 2026 01:09:37 +0000 (0:00:04.147) 0:00:19.778 ********** 2026-04-13 01:12:55.852854 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-13 01:12:55.852866 | orchestrator | 2026-04-13 01:12:55.852877 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-04-13 01:12:55.852889 | orchestrator | Monday 13 April 2026 01:09:40 +0000 (0:00:03.193) 0:00:22.972 ********** 2026-04-13 
01:12:55.852900 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-04-13 01:12:55.852921 | orchestrator | 2026-04-13 01:12:55.852933 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-04-13 01:12:55.852945 | orchestrator | Monday 13 April 2026 01:09:44 +0000 (0:00:03.792) 0:00:26.764 ********** 2026-04-13 01:12:55.852956 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.852968 | orchestrator | 2026-04-13 01:12:55.852979 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-04-13 01:12:55.852991 | orchestrator | Monday 13 April 2026 01:09:47 +0000 (0:00:03.366) 0:00:30.130 ********** 2026-04-13 01:12:55.853003 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.853014 | orchestrator | 2026-04-13 01:12:55.853026 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-04-13 01:12:55.853037 | orchestrator | Monday 13 April 2026 01:09:51 +0000 (0:00:03.950) 0:00:34.080 ********** 2026-04-13 01:12:55.853049 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.853063 | orchestrator | 2026-04-13 01:12:55.853082 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-13 01:12:55.853094 | orchestrator | Monday 13 April 2026 01:09:55 +0000 (0:00:03.804) 0:00:37.885 ********** 2026-04-13 01:12:55.853117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.853134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.853147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.853168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.853198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.853219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.853231 | orchestrator | 2026-04-13 01:12:55.853243 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-13 01:12:55.853255 | orchestrator | Monday 13 April 2026 01:09:57 +0000 (0:00:01.747) 0:00:39.632 ********** 2026-04-13 01:12:55.853267 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.853278 | orchestrator | 2026-04-13 01:12:55.853290 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-04-13 01:12:55.853301 | orchestrator | Monday 13 April 2026 01:09:57 +0000 (0:00:00.130) 0:00:39.762 ********** 2026-04-13 01:12:55.853313 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.853324 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.853338 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.853355 | orchestrator | 2026-04-13 01:12:55.853367 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-13 01:12:55.853379 | orchestrator | Monday 13 April 2026 01:09:57 +0000 (0:00:00.290) 0:00:40.053 ********** 2026-04-13 01:12:55.853390 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-13 01:12:55.853402 | orchestrator | 2026-04-13 01:12:55.853413 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-13 01:12:55.853425 | 
orchestrator | Monday 13 April 2026 01:09:58 +0000 (0:00:00.934) 0:00:40.988 ********** 2026-04-13 01:12:55.853437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.853465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.853483 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.853496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.853508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.853520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.853538 | orchestrator | 2026-04-13 01:12:55.853550 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-13 01:12:55.853562 | orchestrator | Monday 13 April 2026 01:10:01 +0000 (0:00:02.702) 0:00:43.690 ********** 2026-04-13 01:12:55.853573 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:55.853585 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:12:55.853596 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:12:55.853608 | orchestrator | 2026-04-13 01:12:55.853620 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-13 01:12:55.853637 | orchestrator | Monday 13 April 2026 01:10:02 +0000 (0:00:00.503) 0:00:44.193 ********** 2026-04-13 01:12:55.853650 | 
orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:12:55.853661 | orchestrator | 2026-04-13 01:12:55.853673 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-04-13 01:12:55.853877 | orchestrator | Monday 13 April 2026 01:10:02 +0000 (0:00:00.535) 0:00:44.728 ********** 2026-04-13 01:12:55.853900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.853937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.853957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.853989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 
01:12:55.854091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.854117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.854130 | orchestrator | 2026-04-13 01:12:55.854141 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-13 01:12:55.854154 | orchestrator | Monday 13 April 2026 01:10:05 +0000 (0:00:02.495) 0:00:47.224 ********** 2026-04-13 01:12:55.854172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-13 01:12:55.854185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.854205 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.854218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-13 01:12:55.854239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.854252 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.854268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-13 01:12:55.854281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.854292 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.854304 | orchestrator | 2026-04-13 01:12:55.854315 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-13 01:12:55.854336 | orchestrator | Monday 13 April 2026 01:10:06 +0000 (0:00:01.061) 0:00:48.285 ********** 2026-04-13 01:12:55.854348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-13 01:12:55.854360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.854372 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.854391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-13 01:12:55.854410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.854422 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.854434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-13 01:12:55.854452 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.854464 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.854476 | orchestrator | 2026-04-13 01:12:55.854487 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-13 01:12:55.854498 | orchestrator | Monday 13 April 2026 01:10:07 +0000 (0:00:00.891) 0:00:49.177 ********** 2026-04-13 01:12:55.855259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}}) 2026-04-13 01:12:55.855353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.855386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.855423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.855441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.855483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.855505 | orchestrator | 2026-04-13 01:12:55.855525 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-13 01:12:55.855545 | orchestrator | Monday 13 April 2026 01:10:09 +0000 (0:00:02.625) 0:00:51.802 ********** 2026-04-13 01:12:55.855564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.855593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.855628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.855648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.855681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.855702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.855749 | orchestrator | 2026-04-13 01:12:55.855771 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-13 01:12:55.855790 | orchestrator | Monday 13 April 2026 01:10:15 +0000 (0:00:05.875) 0:00:57.678 ********** 2026-04-13 01:12:55.855820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-13 01:12:55.855852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.855873 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.855895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-13 01:12:55.855929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.855950 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.855978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-13 01:12:55.856011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.856033 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.856053 | orchestrator | 2026-04-13 01:12:55.856074 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-13 01:12:55.856095 | orchestrator | Monday 13 April 2026 01:10:16 +0000 (0:00:00.741) 0:00:58.420 ********** 2026-04-13 01:12:55.856114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.856143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.856158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-13 01:12:55.856183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.856196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.856208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.856220 | orchestrator | 2026-04-13 01:12:55.856232 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-13 01:12:55.856244 | orchestrator | Monday 13 April 2026 01:10:18 +0000 (0:00:01.995) 0:01:00.415 ********** 2026-04-13 01:12:55.856255 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.856267 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.856279 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.856291 | orchestrator | 2026-04-13 01:12:55.856302 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-04-13 01:12:55.856314 | orchestrator | Monday 13 April 2026 01:10:18 +0000 (0:00:00.278) 0:01:00.693 ********** 2026-04-13 01:12:55.856326 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.856337 | orchestrator | 2026-04-13 01:12:55.856349 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-04-13 01:12:55.856361 | orchestrator | Monday 13 April 2026 01:10:20 +0000 (0:00:02.139) 0:01:02.833 ********** 2026-04-13 01:12:55.856372 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.856384 | orchestrator | 2026-04-13 01:12:55.856396 | orchestrator | TASK [magnum : Running Magnum bootstrap 
container] ***************************** 2026-04-13 01:12:55.856407 | orchestrator | Monday 13 April 2026 01:10:23 +0000 (0:00:02.392) 0:01:05.226 ********** 2026-04-13 01:12:55.856424 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.856436 | orchestrator | 2026-04-13 01:12:55.856448 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-13 01:12:55.856459 | orchestrator | Monday 13 April 2026 01:10:39 +0000 (0:00:16.229) 0:01:21.455 ********** 2026-04-13 01:12:55.856470 | orchestrator | 2026-04-13 01:12:55.856482 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-13 01:12:55.856500 | orchestrator | Monday 13 April 2026 01:10:39 +0000 (0:00:00.255) 0:01:21.711 ********** 2026-04-13 01:12:55.856511 | orchestrator | 2026-04-13 01:12:55.856522 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-13 01:12:55.856534 | orchestrator | Monday 13 April 2026 01:10:39 +0000 (0:00:00.064) 0:01:21.775 ********** 2026-04-13 01:12:55.856545 | orchestrator | 2026-04-13 01:12:55.856556 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-04-13 01:12:55.856567 | orchestrator | Monday 13 April 2026 01:10:39 +0000 (0:00:00.066) 0:01:21.842 ********** 2026-04-13 01:12:55.856578 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.856589 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:12:55.856601 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:12:55.856612 | orchestrator | 2026-04-13 01:12:55.856623 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-13 01:12:55.856635 | orchestrator | Monday 13 April 2026 01:10:54 +0000 (0:00:14.460) 0:01:36.303 ********** 2026-04-13 01:12:55.856646 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.856658 | orchestrator | changed: 
[testbed-node-1] 2026-04-13 01:12:55.856669 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:12:55.856680 | orchestrator | 2026-04-13 01:12:55.856692 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:12:55.856704 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-13 01:12:55.856785 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-13 01:12:55.856802 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-13 01:12:55.856814 | orchestrator | 2026-04-13 01:12:55.856826 | orchestrator | 2026-04-13 01:12:55.856837 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:12:55.856849 | orchestrator | Monday 13 April 2026 01:11:09 +0000 (0:00:15.079) 0:01:51.382 ********** 2026-04-13 01:12:55.856860 | orchestrator | =============================================================================== 2026-04-13 01:12:55.856871 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.23s 2026-04-13 01:12:55.856883 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.08s 2026-04-13 01:12:55.856895 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.46s 2026-04-13 01:12:55.856906 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.17s 2026-04-13 01:12:55.856917 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.88s 2026-04-13 01:12:55.856928 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.15s 2026-04-13 01:12:55.856940 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.95s 2026-04-13 
01:12:55.856951 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.80s 2026-04-13 01:12:55.856962 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.79s 2026-04-13 01:12:55.856973 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.71s 2026-04-13 01:12:55.856985 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.49s 2026-04-13 01:12:55.856997 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.37s 2026-04-13 01:12:55.857008 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.19s 2026-04-13 01:12:55.857020 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.70s 2026-04-13 01:12:55.857031 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.63s 2026-04-13 01:12:55.857043 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.50s 2026-04-13 01:12:55.857062 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.39s 2026-04-13 01:12:55.857074 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.14s 2026-04-13 01:12:55.857086 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.00s 2026-04-13 01:12:55.857097 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.75s 2026-04-13 01:12:55.857108 | orchestrator | 2026-04-13 01:12:55.857120 | orchestrator | 2026-04-13 01:12:55.857131 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 01:12:55.857142 | orchestrator | 2026-04-13 01:12:55.857154 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-04-13 
01:12:55.857165 | orchestrator | Monday 13 April 2026 01:02:40 +0000 (0:00:00.442) 0:00:00.442 ********** 2026-04-13 01:12:55.857177 | orchestrator | changed: [testbed-manager] 2026-04-13 01:12:55.857189 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.857200 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:12:55.857212 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:12:55.857223 | orchestrator | changed: [testbed-node-3] 2026-04-13 01:12:55.857235 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:12:55.857246 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:12:55.857257 | orchestrator | 2026-04-13 01:12:55.857269 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 01:12:55.857289 | orchestrator | Monday 13 April 2026 01:02:41 +0000 (0:00:01.057) 0:00:01.499 ********** 2026-04-13 01:12:55.857300 | orchestrator | changed: [testbed-manager] 2026-04-13 01:12:55.857312 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.857323 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:12:55.857334 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:12:55.857346 | orchestrator | changed: [testbed-node-3] 2026-04-13 01:12:55.857357 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:12:55.857368 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:12:55.857379 | orchestrator | 2026-04-13 01:12:55.857391 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 01:12:55.857402 | orchestrator | Monday 13 April 2026 01:02:42 +0000 (0:00:01.252) 0:00:02.752 ********** 2026-04-13 01:12:55.857413 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-04-13 01:12:55.857425 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-04-13 01:12:55.857436 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-04-13 01:12:55.857447 | orchestrator | 
changed: [testbed-node-2] => (item=enable_nova_True) 2026-04-13 01:12:55.857458 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-04-13 01:12:55.857470 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-04-13 01:12:55.857481 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-04-13 01:12:55.857492 | orchestrator | 2026-04-13 01:12:55.857503 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-04-13 01:12:55.857515 | orchestrator | 2026-04-13 01:12:55.857526 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-13 01:12:55.857538 | orchestrator | Monday 13 April 2026 01:02:43 +0000 (0:00:01.290) 0:00:04.043 ********** 2026-04-13 01:12:55.857549 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:12:55.857561 | orchestrator | 2026-04-13 01:12:55.857572 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-04-13 01:12:55.857583 | orchestrator | Monday 13 April 2026 01:02:45 +0000 (0:00:01.894) 0:00:05.938 ********** 2026-04-13 01:12:55.857600 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-04-13 01:12:55.857613 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-04-13 01:12:55.857624 | orchestrator | 2026-04-13 01:12:55.857635 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-04-13 01:12:55.857647 | orchestrator | Monday 13 April 2026 01:02:50 +0000 (0:00:04.766) 0:00:10.704 ********** 2026-04-13 01:12:55.857676 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-13 01:12:55.857689 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-13 01:12:55.857700 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.857712 | orchestrator | 2026-04-13 01:12:55.857796 | orchestrator | TASK [nova 
: Ensuring config directories exist] ******************************** 2026-04-13 01:12:55.857814 | orchestrator | Monday 13 April 2026 01:02:54 +0000 (0:00:04.382) 0:00:15.086 ********** 2026-04-13 01:12:55.857827 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.857839 | orchestrator | 2026-04-13 01:12:55.857851 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-04-13 01:12:55.857863 | orchestrator | Monday 13 April 2026 01:02:55 +0000 (0:00:00.730) 0:00:15.817 ********** 2026-04-13 01:12:55.857874 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.857886 | orchestrator | 2026-04-13 01:12:55.857897 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-04-13 01:12:55.857908 | orchestrator | Monday 13 April 2026 01:02:57 +0000 (0:00:01.753) 0:00:17.571 ********** 2026-04-13 01:12:55.857920 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.858013 | orchestrator | 2026-04-13 01:12:55.858080 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-13 01:12:55.858092 | orchestrator | Monday 13 April 2026 01:03:01 +0000 (0:00:03.679) 0:00:21.251 ********** 2026-04-13 01:12:55.858103 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.858115 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.858126 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.858138 | orchestrator | 2026-04-13 01:12:55.858149 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-13 01:12:55.858161 | orchestrator | Monday 13 April 2026 01:03:01 +0000 (0:00:00.587) 0:00:21.838 ********** 2026-04-13 01:12:55.858173 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:55.858184 | orchestrator | 2026-04-13 01:12:55.858196 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-04-13 
01:12:55.858208 | orchestrator | Monday 13 April 2026 01:03:32 +0000 (0:00:30.658) 0:00:52.497 ********** 2026-04-13 01:12:55.858219 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.858230 | orchestrator | 2026-04-13 01:12:55.858242 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-13 01:12:55.858253 | orchestrator | Monday 13 April 2026 01:03:47 +0000 (0:00:15.256) 0:01:07.754 ********** 2026-04-13 01:12:55.858265 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:55.858276 | orchestrator | 2026-04-13 01:12:55.858288 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-13 01:12:55.858298 | orchestrator | Monday 13 April 2026 01:04:00 +0000 (0:00:13.140) 0:01:20.894 ********** 2026-04-13 01:12:55.858308 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:55.858319 | orchestrator | 2026-04-13 01:12:55.858329 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-04-13 01:12:55.858339 | orchestrator | Monday 13 April 2026 01:04:01 +0000 (0:00:01.233) 0:01:22.127 ********** 2026-04-13 01:12:55.858349 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.858359 | orchestrator | 2026-04-13 01:12:55.858370 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-13 01:12:55.858380 | orchestrator | Monday 13 April 2026 01:04:02 +0000 (0:00:00.475) 0:01:22.603 ********** 2026-04-13 01:12:55.858391 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:12:55.858401 | orchestrator | 2026-04-13 01:12:55.858412 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-13 01:12:55.858432 | orchestrator | Monday 13 April 2026 01:04:03 +0000 (0:00:00.694) 0:01:23.298 ********** 2026-04-13 01:12:55.858443 | 
orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:55.858453 | orchestrator | 2026-04-13 01:12:55.858463 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-13 01:12:55.858484 | orchestrator | Monday 13 April 2026 01:04:22 +0000 (0:00:19.268) 0:01:42.566 ********** 2026-04-13 01:12:55.858494 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.858505 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.858515 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.858525 | orchestrator | 2026-04-13 01:12:55.858535 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-04-13 01:12:55.858545 | orchestrator | 2026-04-13 01:12:55.858556 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-13 01:12:55.858566 | orchestrator | Monday 13 April 2026 01:04:23 +0000 (0:00:00.676) 0:01:43.242 ********** 2026-04-13 01:12:55.858576 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:12:55.858586 | orchestrator | 2026-04-13 01:12:55.858597 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-04-13 01:12:55.858607 | orchestrator | Monday 13 April 2026 01:04:24 +0000 (0:00:01.561) 0:01:44.803 ********** 2026-04-13 01:12:55.858618 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.858628 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.858638 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.858648 | orchestrator | 2026-04-13 01:12:55.858658 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-04-13 01:12:55.858668 | orchestrator | Monday 13 April 2026 01:04:26 +0000 (0:00:02.162) 0:01:46.966 ********** 2026-04-13 01:12:55.858678 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.858689 | 
orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.858699 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.858709 | orchestrator | 2026-04-13 01:12:55.858745 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-13 01:12:55.858760 | orchestrator | Monday 13 April 2026 01:04:29 +0000 (0:00:02.340) 0:01:49.307 ********** 2026-04-13 01:12:55.858771 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.858788 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.858798 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.858808 | orchestrator | 2026-04-13 01:12:55.858818 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-13 01:12:55.858828 | orchestrator | Monday 13 April 2026 01:04:29 +0000 (0:00:00.684) 0:01:49.991 ********** 2026-04-13 01:12:55.858838 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-13 01:12:55.858849 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.858859 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-13 01:12:55.858869 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.858879 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-13 01:12:55.858889 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-04-13 01:12:55.858899 | orchestrator | 2026-04-13 01:12:55.858909 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-13 01:12:55.858919 | orchestrator | Monday 13 April 2026 01:04:37 +0000 (0:00:07.654) 0:01:57.646 ********** 2026-04-13 01:12:55.858929 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.858939 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.858949 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.858960 | orchestrator | 2026-04-13 01:12:55.858970 | orchestrator | TASK 
[service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-13 01:12:55.858980 | orchestrator | Monday 13 April 2026 01:04:37 +0000 (0:00:00.406) 0:01:58.053 ********** 2026-04-13 01:12:55.858990 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-13 01:12:55.859001 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.859011 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-13 01:12:55.859021 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.859031 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-13 01:12:55.859041 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.859051 | orchestrator | 2026-04-13 01:12:55.859060 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-13 01:12:55.859080 | orchestrator | Monday 13 April 2026 01:04:39 +0000 (0:00:01.806) 0:01:59.859 ********** 2026-04-13 01:12:55.859090 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.859100 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.859110 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.859120 | orchestrator | 2026-04-13 01:12:55.859131 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-04-13 01:12:55.859141 | orchestrator | Monday 13 April 2026 01:04:40 +0000 (0:00:00.542) 0:02:00.402 ********** 2026-04-13 01:12:55.859151 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.859161 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.859171 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.859181 | orchestrator | 2026-04-13 01:12:55.859191 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-04-13 01:12:55.859201 | orchestrator | Monday 13 April 2026 01:04:41 +0000 (0:00:01.311) 0:02:01.713 ********** 2026-04-13 01:12:55.859211 | orchestrator | 
skipping: [testbed-node-1] 2026-04-13 01:12:55.859221 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.859231 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.859241 | orchestrator | 2026-04-13 01:12:55.859251 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-04-13 01:12:55.859261 | orchestrator | Monday 13 April 2026 01:04:45 +0000 (0:00:03.461) 0:02:05.175 ********** 2026-04-13 01:12:55.859272 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.859282 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.859292 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:55.859302 | orchestrator | 2026-04-13 01:12:55.859313 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-13 01:12:55.859323 | orchestrator | Monday 13 April 2026 01:05:06 +0000 (0:00:21.615) 0:02:26.791 ********** 2026-04-13 01:12:55.859333 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.859343 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.859354 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:55.859364 | orchestrator | 2026-04-13 01:12:55.859391 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-13 01:12:55.859402 | orchestrator | Monday 13 April 2026 01:05:18 +0000 (0:00:11.691) 0:02:38.482 ********** 2026-04-13 01:12:55.859412 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:55.859423 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.859433 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.859443 | orchestrator | 2026-04-13 01:12:55.859454 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-04-13 01:12:55.859464 | orchestrator | Monday 13 April 2026 01:05:19 +0000 (0:00:01.006) 0:02:39.488 ********** 2026-04-13 01:12:55.859474 | orchestrator | skipping: 
[testbed-node-1] 2026-04-13 01:12:55.859484 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.859494 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.859504 | orchestrator | 2026-04-13 01:12:55.859515 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-04-13 01:12:55.859525 | orchestrator | Monday 13 April 2026 01:05:32 +0000 (0:00:12.789) 0:02:52.278 ********** 2026-04-13 01:12:55.859535 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.859545 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.859555 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.859565 | orchestrator | 2026-04-13 01:12:55.859576 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-13 01:12:55.859586 | orchestrator | Monday 13 April 2026 01:05:33 +0000 (0:00:01.345) 0:02:53.623 ********** 2026-04-13 01:12:55.859596 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.859606 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.859616 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.859626 | orchestrator | 2026-04-13 01:12:55.859636 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-13 01:12:55.859654 | orchestrator | 2026-04-13 01:12:55.859665 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-13 01:12:55.859676 | orchestrator | Monday 13 April 2026 01:05:33 +0000 (0:00:00.343) 0:02:53.966 ********** 2026-04-13 01:12:55.859686 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:12:55.859696 | orchestrator | 2026-04-13 01:12:55.859711 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-04-13 01:12:55.859745 | orchestrator | Monday 13 April 2026 01:05:34 
+0000 (0:00:00.742) 0:02:54.709 ********** 2026-04-13 01:12:55.859757 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-04-13 01:12:55.859767 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-04-13 01:12:55.859778 | orchestrator | 2026-04-13 01:12:55.859788 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-04-13 01:12:55.859798 | orchestrator | Monday 13 April 2026 01:05:37 +0000 (0:00:03.237) 0:02:57.947 ********** 2026-04-13 01:12:55.859808 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-04-13 01:12:55.859818 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-04-13 01:12:55.859828 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-04-13 01:12:55.859839 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-04-13 01:12:55.859848 | orchestrator | 2026-04-13 01:12:55.859858 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-04-13 01:12:55.859868 | orchestrator | Monday 13 April 2026 01:05:44 +0000 (0:00:06.420) 0:03:04.367 ********** 2026-04-13 01:12:55.859879 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-13 01:12:55.859889 | orchestrator | 2026-04-13 01:12:55.859900 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-04-13 01:12:55.859910 | orchestrator | Monday 13 April 2026 01:05:47 +0000 (0:00:03.436) 0:03:07.804 ********** 2026-04-13 01:12:55.859924 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-04-13 01:12:55.859941 | orchestrator | [WARNING]: Module did not set no_log for update_password 
2026-04-13 01:12:55.859958 | orchestrator | 2026-04-13 01:12:55.859975 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-13 01:12:55.859990 | orchestrator | Monday 13 April 2026 01:05:51 +0000 (0:00:03.942) 0:03:11.746 ********** 2026-04-13 01:12:55.860006 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-13 01:12:55.860024 | orchestrator | 2026-04-13 01:12:55.860040 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-04-13 01:12:55.860058 | orchestrator | Monday 13 April 2026 01:05:54 +0000 (0:00:03.323) 0:03:15.070 ********** 2026-04-13 01:12:55.860071 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-13 01:12:55.860081 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-13 01:12:55.860099 | orchestrator | 2026-04-13 01:12:55.860116 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-13 01:12:55.860132 | orchestrator | Monday 13 April 2026 01:06:01 +0000 (0:00:06.970) 0:03:22.040 ********** 2026-04-13 01:12:55.860165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 01:12:55.860199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.860226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 01:12:55.860247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 01:12:55.860276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.860305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.860325 | orchestrator | 2026-04-13 01:12:55.860343 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-13 01:12:55.860359 | orchestrator | Monday 13 April 2026 01:06:04 +0000 (0:00:02.283) 0:03:24.324 ********** 2026-04-13 01:12:55.860377 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.860394 | orchestrator | 2026-04-13 01:12:55.860411 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-13 01:12:55.860427 | orchestrator | Monday 13 April 2026 01:06:04 +0000 (0:00:00.652) 0:03:24.977 ********** 2026-04-13 01:12:55.860438 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.860448 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.860458 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.860468 | orchestrator | 2026-04-13 01:12:55.860478 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-13 01:12:55.860489 | orchestrator | Monday 13 April 2026 01:06:05 +0000 (0:00:00.621) 0:03:25.599 ********** 2026-04-13 01:12:55.860505 | orchestrator | ok: [testbed-node-0 -> 
localhost] 2026-04-13 01:12:55.860516 | orchestrator | 2026-04-13 01:12:55.860526 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-13 01:12:55.860536 | orchestrator | Monday 13 April 2026 01:06:06 +0000 (0:00:01.298) 0:03:26.897 ********** 2026-04-13 01:12:55.860545 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.860556 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.860565 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.860576 | orchestrator | 2026-04-13 01:12:55.860586 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-13 01:12:55.860596 | orchestrator | Monday 13 April 2026 01:06:07 +0000 (0:00:00.307) 0:03:27.204 ********** 2026-04-13 01:12:55.860606 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:12:55.860617 | orchestrator | 2026-04-13 01:12:55.860627 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-13 01:12:55.860637 | orchestrator | Monday 13 April 2026 01:06:08 +0000 (0:00:01.086) 0:03:28.292 ********** 2026-04-13 01:12:55.860649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 01:12:55.860677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 01:12:55.860696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 01:12:55.860713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.860790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.860806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.860824 | orchestrator | 2026-04-13 01:12:55.860836 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-13 01:12:55.860846 | orchestrator | Monday 13 April 2026 01:06:11 +0000 (0:00:03.519) 0:03:31.811 ********** 2026-04-13 01:12:55.860866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-13 01:12:55.860879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.860890 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.860907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-13 01:12:55.860919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.860936 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.860953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-13 01:12:55.860965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.860976 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.860986 | orchestrator | 2026-04-13 01:12:55.860996 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-13 01:12:55.861007 | orchestrator | Monday 13 April 2026 01:06:13 +0000 (0:00:01.650) 0:03:33.461 ********** 2026-04-13 01:12:55.861028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': 
{'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-13 01:12:55.861040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.861057 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.861075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-13 01:12:55.861088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-13 01:12:55.861104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.861115 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.861127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.861145 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.861156 | orchestrator | 2026-04-13 01:12:55.861166 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-13 01:12:55.861175 | orchestrator | Monday 13 April 2026 01:06:14 +0000 (0:00:01.570) 0:03:35.031 ********** 2026-04-13 01:12:55.861184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 01:12:55.861201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 01:12:55.861215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.861225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 01:12:55.861240 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.861253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.861262 | orchestrator | 2026-04-13 01:12:55.861271 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-13 01:12:55.861280 | orchestrator | Monday 13 April 2026 01:06:18 +0000 (0:00:03.663) 0:03:38.695 ********** 2026-04-13 01:12:55.861292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 01:12:55.861302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 
01:12:55.861318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 01:12:55.861333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.861343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.861355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.861364 | orchestrator | 2026-04-13 01:12:55.861373 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-13 01:12:55.861381 | orchestrator | Monday 13 April 2026 01:06:27 +0000 (0:00:09.138) 0:03:47.833 ********** 2026-04-13 01:12:55.861396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-13 01:12:55.861406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.861414 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.861428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-13 01:12:55.861438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.861447 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.861459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-13 01:12:55.861473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.861483 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.861491 | orchestrator | 2026-04-13 01:12:55.861500 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-13 01:12:55.861508 | orchestrator | Monday 13 April 2026 01:06:29 +0000 (0:00:01.358) 0:03:49.192 ********** 2026-04-13 01:12:55.861517 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.861526 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:12:55.861535 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:12:55.861543 | orchestrator | 2026-04-13 01:12:55.861551 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-04-13 01:12:55.861560 | orchestrator | Monday 13 April 2026 01:06:31 +0000 
(0:00:02.740) 0:03:51.932 ********** 2026-04-13 01:12:55.861568 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.861577 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.861585 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.861593 | orchestrator | 2026-04-13 01:12:55.861602 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-04-13 01:12:55.861610 | orchestrator | Monday 13 April 2026 01:06:32 +0000 (0:00:00.425) 0:03:52.358 ********** 2026-04-13 01:12:55.861625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 01:12:55.861674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 01:12:55.861686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-13 01:12:55.861696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.861710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.861738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.861754 | orchestrator | 2026-04-13 01:12:55.861763 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-13 01:12:55.861772 | orchestrator | Monday 13 April 2026 01:06:34 +0000 (0:00:01.965) 0:03:54.323 ********** 2026-04-13 01:12:55.861780 | orchestrator | 2026-04-13 01:12:55.861788 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-13 01:12:55.861803 | orchestrator | Monday 13 April 2026 01:06:34 +0000 (0:00:00.321) 0:03:54.644 ********** 2026-04-13 01:12:55.861812 | orchestrator | 2026-04-13 01:12:55.861820 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-13 01:12:55.861828 | orchestrator | Monday 13 April 2026 01:06:34 +0000 (0:00:00.160) 0:03:54.805 ********** 2026-04-13 01:12:55.861836 | orchestrator | 2026-04-13 01:12:55.861845 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-13 01:12:55.861853 | orchestrator | Monday 13 April 2026 01:06:35 +0000 (0:00:00.539) 0:03:55.345 ********** 2026-04-13 01:12:55.861861 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.861869 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:12:55.861878 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:12:55.861886 | orchestrator | 2026-04-13 01:12:55.861895 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-13 01:12:55.861903 | orchestrator | Monday 13 April 2026 01:06:56 +0000 (0:00:21.491) 0:04:16.837 ********** 2026-04-13 01:12:55.861912 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.861920 | orchestrator | changed: [testbed-node-2] 
2026-04-13 01:12:55.861928 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:12:55.861936 | orchestrator | 2026-04-13 01:12:55.861945 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-13 01:12:55.861954 | orchestrator | 2026-04-13 01:12:55.861962 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-13 01:12:55.861970 | orchestrator | Monday 13 April 2026 01:07:10 +0000 (0:00:13.782) 0:04:30.619 ********** 2026-04-13 01:12:55.861978 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:12:55.861988 | orchestrator | 2026-04-13 01:12:55.861996 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-13 01:12:55.862004 | orchestrator | Monday 13 April 2026 01:07:12 +0000 (0:00:02.267) 0:04:32.886 ********** 2026-04-13 01:12:55.862013 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.862075 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.862084 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.862092 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.862101 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.862110 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.862118 | orchestrator | 2026-04-13 01:12:55.862126 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-13 01:12:55.862135 | orchestrator | Monday 13 April 2026 01:07:13 +0000 (0:00:00.835) 0:04:33.722 ********** 2026-04-13 01:12:55.862143 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.862151 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.862160 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.862168 | orchestrator | included: module-load 
for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 01:12:55.862177 | orchestrator | 2026-04-13 01:12:55.862185 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-13 01:12:55.862194 | orchestrator | Monday 13 April 2026 01:07:14 +0000 (0:00:01.132) 0:04:34.854 ********** 2026-04-13 01:12:55.862202 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-13 01:12:55.862217 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-04-13 01:12:55.862226 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-13 01:12:55.862234 | orchestrator | 2026-04-13 01:12:55.862243 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-13 01:12:55.862251 | orchestrator | Monday 13 April 2026 01:07:15 +0000 (0:00:01.120) 0:04:35.974 ********** 2026-04-13 01:12:55.862260 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-04-13 01:12:55.862268 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-04-13 01:12:55.862277 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-04-13 01:12:55.862285 | orchestrator | 2026-04-13 01:12:55.862294 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-13 01:12:55.862309 | orchestrator | Monday 13 April 2026 01:07:17 +0000 (0:00:01.508) 0:04:37.483 ********** 2026-04-13 01:12:55.862318 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-13 01:12:55.862326 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.862335 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-13 01:12:55.862343 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.862352 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-13 01:12:55.862360 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.862368 | orchestrator | 
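Editorial aside: the module-load tasks above first `modprobe` the module ("Load modules") and then write a drop-in under `/etc/modules-load.d/` ("Persist modules via modules-load.d") so `br_netfilter` survives reboots. A hedged sketch of the persistence half, using the standard systemd modules-load.d(5) file convention; it writes into a temporary directory rather than `/etc` for illustration:

```python
from pathlib import Path
import tempfile


def persist_module(name: str, etc_dir: Path) -> Path:
    """Write a modules-load.d style drop-in so `name` loads at boot.

    The one-module-name-per-line file format is the documented
    modules-load.d(5) convention; the helper itself is illustrative,
    not the role's actual implementation.
    """
    conf = etc_dir / "modules-load.d" / f"{name}.conf"
    conf.parent.mkdir(parents=True, exist_ok=True)
    conf.write_text(f"{name}\n")
    return conf


with tempfile.TemporaryDirectory() as tmp:
    path = persist_module("br_netfilter", Path(tmp))
    print(path, "->", path.read_text().strip())
```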
2026-04-13 01:12:55.862377 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-13 01:12:55.862386 | orchestrator | Monday 13 April 2026 01:07:18 +0000 (0:00:00.983) 0:04:38.466 ********** 2026-04-13 01:12:55.862394 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-13 01:12:55.862402 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-13 01:12:55.862411 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-13 01:12:55.862420 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.862428 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-13 01:12:55.862437 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-13 01:12:55.862445 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-13 01:12:55.862454 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.862462 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-13 01:12:55.862471 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-13 01:12:55.862479 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.862487 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-13 01:12:55.862500 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-13 01:12:55.862509 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-13 01:12:55.862517 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-13 01:12:55.862525 | orchestrator | 2026-04-13 01:12:55.862534 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] 
******************************** 2026-04-13 01:12:55.862542 | orchestrator | Monday 13 April 2026 01:07:20 +0000 (0:00:02.362) 0:04:40.828 ********** 2026-04-13 01:12:55.862550 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.862559 | orchestrator | changed: [testbed-node-3] 2026-04-13 01:12:55.862567 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.862576 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.862584 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:12:55.862592 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:12:55.862600 | orchestrator | 2026-04-13 01:12:55.862609 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-13 01:12:55.862617 | orchestrator | Monday 13 April 2026 01:07:22 +0000 (0:00:01.603) 0:04:42.432 ********** 2026-04-13 01:12:55.862632 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.862641 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.862649 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.862657 | orchestrator | changed: [testbed-node-3] 2026-04-13 01:12:55.862666 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:12:55.862674 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:12:55.862682 | orchestrator | 2026-04-13 01:12:55.862691 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-13 01:12:55.862699 | orchestrator | Monday 13 April 2026 01:07:24 +0000 (0:00:01.842) 0:04:44.276 ********** 2026-04-13 01:12:55.862709 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-13 01:12:55.862735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-13 01:12:55.862769 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 
2026-04-13 01:12:55.862789 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-13 01:12:55.862802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-13 01:12:55.862818 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.862828 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-13 01:12:55.862843 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-13 01:12:55.862852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-13 01:12:55.862860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.862873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-13 01:12:55.862886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.862895 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.862904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.862918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.862927 | orchestrator | 2026-04-13 01:12:55.862936 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-13 01:12:55.862944 | orchestrator | Monday 13 April 2026 01:07:27 +0000 (0:00:03.163) 0:04:47.439 ********** 2026-04-13 01:12:55.862953 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:12:55.862961 | orchestrator | 2026-04-13 01:12:55.862970 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-13 01:12:55.862978 | orchestrator | Monday 13 April 2026 01:07:28 +0000 (0:00:01.713) 0:04:49.153 ********** 2026-04-13 01:12:55.862991 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}}) 2026-04-13 01:12:55.863005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-13 01:12:55.863014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-13 01:12:55.863029 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-13 01:12:55.863038 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-13 01:12:55.863046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-13 01:12:55.863065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-13 01:12:55.863074 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-13 01:12:55.863082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.863091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.863105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-13 01:12:55.863115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.863127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.863142 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.863151 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.863159 | orchestrator | 2026-04-13 01:12:55.863168 | orchestrator | TASK [service-cert-copy : nova | Copying 
over backend internal TLS certificate] *** 2026-04-13 01:12:55.863176 | orchestrator | Monday 13 April 2026 01:07:33 +0000 (0:00:04.882) 0:04:54.035 ********** 2026-04-13 01:12:55.863185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-13 01:12:55.863199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-13 01:12:55.863208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.863223 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.863238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-13 01:12:55.863247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-13 01:12:55.863256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.863264 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.863278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-13 01:12:55.863288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-13 01:12:55.863305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.863315 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.863324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  
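The `healthcheck` dict carried by each container definition above (`interval`, `retries`, `start_period`, `test`, `timeout`) mirrors Docker's `HEALTHCHECK` options, with the `['CMD-SHELL', '<cmd>']` pair becoming the probe command. A minimal sketch of that mapping, using a hypothetical helper (kolla-ansible does this internally; this is illustrative only):

```python
# Illustrative only: translate a kolla-style healthcheck dict into the
# equivalent `docker run` flags. Durations in the dict are plain seconds.

def healthcheck_to_args(hc: dict) -> list[str]:
    kind, cmd = hc["test"]
    assert kind == "CMD-SHELL"  # the only form seen in this log
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

args = healthcheck_to_args({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen sshd 8022"], "timeout": "30",
})
print(args[0])  # --health-cmd=healthcheck_listen sshd 8022
```

The `healthcheck_listen`, `healthcheck_port`, and `healthcheck_curl` commands seen in the log are helper scripts shipped inside the kolla images, which is why the probe is always a `CMD-SHELL` entry rather than a direct exec.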
2026-04-13 01:12:55.863332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.863341 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.863350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-13 01:12:55.863362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.863371 | orchestrator | skipping: 
[testbed-node-1] 2026-04-13 01:12:55.863380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-13 01:12:55.863393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.863402 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.863410 | orchestrator | 2026-04-13 01:12:55.863418 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-13 01:12:55.863431 | orchestrator | Monday 13 April 2026 01:07:36 +0000 (0:00:02.375) 0:04:56.411 ********** 2026-04-13 01:12:55.863440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-13 01:12:55.863449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-13 01:12:55.863458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}})  2026-04-13 01:12:55.863467 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.863481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-13 01:12:55.863496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.863505 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.863517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-13 01:12:55.863526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-13 01:12:55.863535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.863544 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.863552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 
'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-13 01:12:55.863571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-13 01:12:55.863580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.863594 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.863613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-13 01:12:55.863628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.863641 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.863655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-13 01:12:55.863669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.863693 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.863705 | orchestrator | 2026-04-13 01:12:55.863763 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-13 01:12:55.863775 | orchestrator | Monday 13 April 2026 01:07:39 +0000 (0:00:03.418) 0:04:59.829 ********** 2026-04-13 01:12:55.863783 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.863791 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.863800 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.863815 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 01:12:55.863824 | orchestrator | 2026-04-13 01:12:55.863832 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-13 01:12:55.863840 | orchestrator | Monday 13 April 2026 01:07:41 +0000 (0:00:01.685) 0:05:01.515 ********** 2026-04-13 01:12:55.863848 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-13 01:12:55.863857 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-13 01:12:55.863865 | 
orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-13 01:12:55.863874 | orchestrator | 2026-04-13 01:12:55.863882 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-13 01:12:55.863890 | orchestrator | Monday 13 April 2026 01:07:42 +0000 (0:00:00.763) 0:05:02.278 ********** 2026-04-13 01:12:55.863898 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-13 01:12:55.863907 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-13 01:12:55.863915 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-13 01:12:55.863923 | orchestrator | 2026-04-13 01:12:55.863932 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-13 01:12:55.863940 | orchestrator | Monday 13 April 2026 01:07:42 +0000 (0:00:00.866) 0:05:03.144 ********** 2026-04-13 01:12:55.863948 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:12:55.863956 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:12:55.863965 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:12:55.863973 | orchestrator | 2026-04-13 01:12:55.863981 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-13 01:12:55.863989 | orchestrator | Monday 13 April 2026 01:07:43 +0000 (0:00:00.476) 0:05:03.620 ********** 2026-04-13 01:12:55.863998 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:12:55.864006 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:12:55.864014 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:12:55.864022 | orchestrator | 2026-04-13 01:12:55.864031 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-13 01:12:55.864039 | orchestrator | Monday 13 April 2026 01:07:43 +0000 (0:00:00.496) 0:05:04.117 ********** 2026-04-13 01:12:55.864048 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-13 01:12:55.864061 | orchestrator | changed: [testbed-node-4] => 
(item=nova-compute) 2026-04-13 01:12:55.864070 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-13 01:12:55.864078 | orchestrator | 2026-04-13 01:12:55.864086 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-13 01:12:55.864094 | orchestrator | Monday 13 April 2026 01:07:45 +0000 (0:00:01.146) 0:05:05.264 ********** 2026-04-13 01:12:55.864103 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-13 01:12:55.864111 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-13 01:12:55.864119 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-13 01:12:55.864128 | orchestrator | 2026-04-13 01:12:55.864136 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-13 01:12:55.864145 | orchestrator | Monday 13 April 2026 01:07:46 +0000 (0:00:01.729) 0:05:06.994 ********** 2026-04-13 01:12:55.864153 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-13 01:12:55.864161 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-13 01:12:55.864169 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-13 01:12:55.864184 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-04-13 01:12:55.864192 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-04-13 01:12:55.864200 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-04-13 01:12:55.864209 | orchestrator | 2026-04-13 01:12:55.864217 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-13 01:12:55.864225 | orchestrator | Monday 13 April 2026 01:07:50 +0000 (0:00:04.015) 0:05:11.009 ********** 2026-04-13 01:12:55.864233 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.864241 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.864250 | orchestrator | 
skipping: [testbed-node-5] 2026-04-13 01:12:55.864258 | orchestrator | 2026-04-13 01:12:55.864266 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-13 01:12:55.864275 | orchestrator | Monday 13 April 2026 01:07:51 +0000 (0:00:00.349) 0:05:11.359 ********** 2026-04-13 01:12:55.864283 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.864291 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.864298 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.864305 | orchestrator | 2026-04-13 01:12:55.864312 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-13 01:12:55.864319 | orchestrator | Monday 13 April 2026 01:07:51 +0000 (0:00:00.304) 0:05:11.663 ********** 2026-04-13 01:12:55.864326 | orchestrator | changed: [testbed-node-3] 2026-04-13 01:12:55.864333 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:12:55.864340 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:12:55.864348 | orchestrator | 2026-04-13 01:12:55.864355 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-13 01:12:55.864362 | orchestrator | Monday 13 April 2026 01:07:53 +0000 (0:00:01.524) 0:05:13.188 ********** 2026-04-13 01:12:55.864369 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-13 01:12:55.864377 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-13 01:12:55.864384 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-13 01:12:55.864391 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 
'enabled': 'yes'}) 2026-04-13 01:12:55.864398 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-13 01:12:55.864410 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-13 01:12:55.864418 | orchestrator | 2026-04-13 01:12:55.864425 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-13 01:12:55.864432 | orchestrator | Monday 13 April 2026 01:07:56 +0000 (0:00:03.947) 0:05:17.135 ********** 2026-04-13 01:12:55.864439 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-13 01:12:55.864446 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-13 01:12:55.864453 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-13 01:12:55.864460 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-13 01:12:55.864467 | orchestrator | changed: [testbed-node-3] 2026-04-13 01:12:55.864474 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-13 01:12:55.864481 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:12:55.864488 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-13 01:12:55.864494 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:12:55.864501 | orchestrator | 2026-04-13 01:12:55.864508 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-13 01:12:55.864516 | orchestrator | Monday 13 April 2026 01:08:00 +0000 (0:00:03.436) 0:05:20.571 ********** 2026-04-13 01:12:55.864527 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.864534 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.864541 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.864548 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for 
testbed-node-4, testbed-node-3, testbed-node-5 2026-04-13 01:12:55.864555 | orchestrator | 2026-04-13 01:12:55.864562 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-13 01:12:55.864569 | orchestrator | Monday 13 April 2026 01:08:02 +0000 (0:00:02.285) 0:05:22.857 ********** 2026-04-13 01:12:55.864576 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-13 01:12:55.864583 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-13 01:12:55.864590 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-13 01:12:55.864597 | orchestrator | 2026-04-13 01:12:55.864607 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-13 01:12:55.864615 | orchestrator | Monday 13 April 2026 01:08:03 +0000 (0:00:00.995) 0:05:23.853 ********** 2026-04-13 01:12:55.864622 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.864629 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.864635 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.864642 | orchestrator | 2026-04-13 01:12:55.864649 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-13 01:12:55.864656 | orchestrator | Monday 13 April 2026 01:08:04 +0000 (0:00:00.339) 0:05:24.192 ********** 2026-04-13 01:12:55.864663 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.864670 | orchestrator | 2026-04-13 01:12:55.864677 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-13 01:12:55.864684 | orchestrator | Monday 13 April 2026 01:08:04 +0000 (0:00:00.177) 0:05:24.370 ********** 2026-04-13 01:12:55.864691 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.864698 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.864705 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.864712 | orchestrator | skipping: [testbed-node-0] 
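The two tasks "Pushing nova secret xml for libvirt" and "Pushing secrets key for libvirt" above are the standard two-step libvirt ceph-secret setup: first a secret definition (an XML document keyed by the UUID shown in the item, e.g. `5a2bf0bf-...` for `client.nova`) is registered, then the actual ceph key is attached to that UUID. A sketch of the XML involved, using a hypothetical `render_secret` helper (the XML shape is libvirt's standard ceph-usage secret format; the helper itself is not kolla-ansible code):

```python
# Illustrative only: build the libvirt secret XML that the
# "Pushing nova secret xml for libvirt" step renders per ceph client.

SECRET_XML = """<secret ephemeral='no' private='no'>
  <uuid>{uuid}</uuid>
  <usage type='ceph'>
    <name>{name}</name>
  </usage>
</secret>"""

def render_secret(uuid: str, name: str) -> str:
    return SECRET_XML.format(uuid=uuid, name=name)

xml = render_secret("5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd", "client.nova secret")

# The follow-up "Pushing secrets key for libvirt" step corresponds to:
#   virsh secret-define --file secret.xml
#   virsh secret-set-value --secret <uuid> --base64 <ceph key>
```

The key material itself never appears in the log ("(item=None)" in the task output) because the value is marked sensitive.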
2026-04-13 01:12:55.864737 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.864745 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.864752 | orchestrator | 2026-04-13 01:12:55.864759 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-13 01:12:55.864766 | orchestrator | Monday 13 April 2026 01:08:05 +0000 (0:00:00.820) 0:05:25.191 ********** 2026-04-13 01:12:55.864773 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-13 01:12:55.864780 | orchestrator | 2026-04-13 01:12:55.864787 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-13 01:12:55.864794 | orchestrator | Monday 13 April 2026 01:08:05 +0000 (0:00:00.824) 0:05:26.015 ********** 2026-04-13 01:12:55.864801 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.864807 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.864815 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.864822 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.864829 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.864835 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.864842 | orchestrator | 2026-04-13 01:12:55.864849 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-13 01:12:55.864856 | orchestrator | Monday 13 April 2026 01:08:06 +0000 (0:00:00.573) 0:05:26.588 ********** 2026-04-13 01:12:55.864864 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-13 01:12:55.864883 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-13 01:12:55.864891 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-13 01:12:55.864905 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-13 01:12:55.864913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-13 01:12:55.864920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-13 01:12:55.864928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-13 01:12:55.864943 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-13 01:12:55.864951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.864962 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-13 01:12:55.864970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.864977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
2026-04-13 01:12:55.864984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.865001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.865009 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.865016 | orchestrator | 2026-04-13 01:12:55.865023 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-13 01:12:55.865031 | orchestrator | Monday 13 April 2026 01:08:11 +0000 (0:00:05.355) 0:05:31.944 ********** 2026-04-13 01:12:55.865041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-13 01:12:55.865049 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-13 01:12:55.865056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-13 01:12:55.865069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-13 01:12:55.865085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-13 01:12:55.865098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-13 01:12:55.865114 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.865125 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.865142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-13 01:12:55.865233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 
2026-04-13 01:12:55.865246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-13 01:12:55.865253 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.865265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.865272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.865279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.865293 | orchestrator | 2026-04-13 01:12:55.865300 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-13 01:12:55.865307 | orchestrator | Monday 13 April 2026 01:08:19 +0000 (0:00:07.292) 0:05:39.237 ********** 2026-04-13 01:12:55.865314 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.865322 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.865329 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.865335 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.865342 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.865349 | orchestrator | skipping: [testbed-node-0] 
2026-04-13 01:12:55.865356 | orchestrator | 2026-04-13 01:12:55.865363 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-13 01:12:55.865370 | orchestrator | Monday 13 April 2026 01:08:21 +0000 (0:00:02.249) 0:05:41.486 ********** 2026-04-13 01:12:55.865377 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-13 01:12:55.865384 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-13 01:12:55.865391 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-13 01:12:55.865398 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-13 01:12:55.865409 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-13 01:12:55.865416 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-13 01:12:55.865423 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-13 01:12:55.865430 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.865437 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-13 01:12:55.865444 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.865451 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-13 01:12:55.865458 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.865465 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-13 01:12:55.865472 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-13 01:12:55.865479 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-13 01:12:55.865486 | orchestrator | 2026-04-13 01:12:55.865493 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-13 01:12:55.865500 | orchestrator | Monday 13 April 2026 01:08:26 +0000 (0:00:05.226) 0:05:46.712 ********** 2026-04-13 01:12:55.865506 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.865513 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.865521 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.865527 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.865534 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.865541 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.865548 | orchestrator | 2026-04-13 01:12:55.865555 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-13 01:12:55.865562 | orchestrator | Monday 13 April 2026 01:08:27 +0000 (0:00:00.661) 0:05:47.374 ********** 2026-04-13 01:12:55.865572 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-13 01:12:55.865585 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-13 01:12:55.865592 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-13 01:12:55.865599 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-13 01:12:55.865606 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-13 01:12:55.865613 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-13 
01:12:55.865620 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-13 01:12:55.865628 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-13 01:12:55.865634 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-13 01:12:55.865641 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-13 01:12:55.865648 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-13 01:12:55.865655 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.865662 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-13 01:12:55.865669 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.865676 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-13 01:12:55.865683 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.865690 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-13 01:12:55.865697 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-13 01:12:55.865704 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-13 01:12:55.865711 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-13 01:12:55.865737 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 
'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-13 01:12:55.865746 | orchestrator | 2026-04-13 01:12:55.865752 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-13 01:12:55.865759 | orchestrator | Monday 13 April 2026 01:08:35 +0000 (0:00:07.860) 0:05:55.235 ********** 2026-04-13 01:12:55.865767 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-13 01:12:55.865774 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-13 01:12:55.865785 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-13 01:12:55.865792 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-13 01:12:55.865799 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-13 01:12:55.865806 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-13 01:12:55.865813 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-13 01:12:55.865820 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-13 01:12:55.865827 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-13 01:12:55.865839 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-13 01:12:55.865846 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-13 01:12:55.865853 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-13 01:12:55.865859 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-13 01:12:55.865867 | 
orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.865873 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-13 01:12:55.865880 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-13 01:12:55.865887 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.865894 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-13 01:12:55.865901 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.865912 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-13 01:12:55.865919 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-13 01:12:55.865926 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-13 01:12:55.865933 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-13 01:12:55.865940 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-13 01:12:55.865947 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-13 01:12:55.865954 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-13 01:12:55.865961 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-13 01:12:55.865968 | orchestrator | 2026-04-13 01:12:55.865975 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-13 01:12:55.865981 | orchestrator | Monday 13 April 2026 01:08:43 +0000 (0:00:08.230) 0:06:03.465 ********** 2026-04-13 01:12:55.865988 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.865995 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.866002 | 
orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.866009 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.866040 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.866049 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.866057 | orchestrator | 2026-04-13 01:12:55.866063 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-13 01:12:55.866071 | orchestrator | Monday 13 April 2026 01:08:43 +0000 (0:00:00.639) 0:06:04.105 ********** 2026-04-13 01:12:55.866078 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.866084 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.866091 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.866098 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.866105 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.866112 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.866119 | orchestrator | 2026-04-13 01:12:55.866126 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-13 01:12:55.866133 | orchestrator | Monday 13 April 2026 01:08:44 +0000 (0:00:00.993) 0:06:05.098 ********** 2026-04-13 01:12:55.866140 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.866147 | orchestrator | changed: [testbed-node-3] 2026-04-13 01:12:55.866154 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.866161 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.866168 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:12:55.866175 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:12:55.866182 | orchestrator | 2026-04-13 01:12:55.866189 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-04-13 01:12:55.866204 | orchestrator | Monday 13 April 2026 01:08:47 +0000 (0:00:02.418) 0:06:07.517 ********** 2026-04-13 01:12:55.866211 | 
orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.866218 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.866225 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.866232 | orchestrator | changed: [testbed-node-3] 2026-04-13 01:12:55.866238 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:12:55.866245 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:12:55.866252 | orchestrator | 2026-04-13 01:12:55.866259 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-13 01:12:55.866266 | orchestrator | Monday 13 April 2026 01:08:49 +0000 (0:00:02.077) 0:06:09.594 ********** 2026-04-13 01:12:55.866280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-13 01:12:55.866288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-13 01:12:55.866302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.866310 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.866317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  
2026-04-13 01:12:55.866325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-13 01:12:55.866342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.866349 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.866357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-13 01:12:55.866368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-13 01:12:55.866375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.866383 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.866390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-13 01:12:55.866402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.866410 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.866421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-13 01:12:55.866428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.866435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-13 01:12:55.866443 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.866453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 01:12:55.866460 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.866467 | orchestrator | 2026-04-13 01:12:55.866475 | orchestrator | TASK [nova-cell : Copying over vendordata 
file to containers] ****************** 2026-04-13 01:12:55.866482 | orchestrator | Monday 13 April 2026 01:08:50 +0000 (0:00:01.464) 0:06:11.058 ********** 2026-04-13 01:12:55.866489 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-13 01:12:55.866496 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-13 01:12:55.866508 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.866515 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-13 01:12:55.866522 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-13 01:12:55.866529 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.866536 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-13 01:12:55.866543 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-13 01:12:55.866550 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.866557 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-13 01:12:55.866564 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-13 01:12:55.866571 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.866578 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-13 01:12:55.866585 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-13 01:12:55.866592 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.866599 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-13 01:12:55.866606 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-13 01:12:55.866613 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.866620 | orchestrator | 2026-04-13 01:12:55.866627 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-04-13 01:12:55.866634 | orchestrator | Monday 13 April 2026 
01:08:51 +0000 (0:00:00.970) 0:06:12.029 ********** 2026-04-13 01:12:55.866645 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-13 01:12:55.866653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-13 01:12:55.866663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-13 01:12:55.866675 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-13 01:12:55.866683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 
'timeout': '30'}}}) 2026-04-13 01:12:55.866690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-13 01:12:55.866701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-13 01:12:55.866708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-13 01:12:55.866716 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-13 01:12:55.866776 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.866790 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.866797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.866804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.866816 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.866824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-13 01:12:55.866831 | orchestrator | 2026-04-13 01:12:55.866838 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-13 01:12:55.866848 | orchestrator | Monday 13 April 2026 01:08:54 +0000 (0:00:02.849) 0:06:14.878 ********** 2026-04-13 01:12:55.866855 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.866864 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.866871 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.866878 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.866884 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.866890 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.866897 | orchestrator | 2026-04-13 01:12:55.866903 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-13 01:12:55.866910 | orchestrator | Monday 13 April 2026 01:08:55 +0000 (0:00:00.987) 0:06:15.866 ********** 2026-04-13 01:12:55.866916 | orchestrator | 2026-04-13 01:12:55.866923 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-13 01:12:55.866929 | orchestrator | Monday 13 April 2026 
01:08:55 +0000 (0:00:00.137) 0:06:16.004 ********** 2026-04-13 01:12:55.866936 | orchestrator | 2026-04-13 01:12:55.866942 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-13 01:12:55.866949 | orchestrator | Monday 13 April 2026 01:08:55 +0000 (0:00:00.132) 0:06:16.136 ********** 2026-04-13 01:12:55.866955 | orchestrator | 2026-04-13 01:12:55.866961 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-13 01:12:55.866968 | orchestrator | Monday 13 April 2026 01:08:56 +0000 (0:00:00.137) 0:06:16.273 ********** 2026-04-13 01:12:55.866974 | orchestrator | 2026-04-13 01:12:55.866981 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-13 01:12:55.866987 | orchestrator | Monday 13 April 2026 01:08:56 +0000 (0:00:00.137) 0:06:16.411 ********** 2026-04-13 01:12:55.866994 | orchestrator | 2026-04-13 01:12:55.867001 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-13 01:12:55.867007 | orchestrator | Monday 13 April 2026 01:08:56 +0000 (0:00:00.345) 0:06:16.756 ********** 2026-04-13 01:12:55.867013 | orchestrator | 2026-04-13 01:12:55.867020 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-13 01:12:55.867026 | orchestrator | Monday 13 April 2026 01:08:56 +0000 (0:00:00.146) 0:06:16.903 ********** 2026-04-13 01:12:55.867033 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:12:55.867039 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.867046 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:12:55.867100 | orchestrator | 2026-04-13 01:12:55.867108 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-13 01:12:55.867114 | orchestrator | Monday 13 April 2026 01:09:09 +0000 (0:00:12.358) 0:06:29.262 ********** 2026-04-13 
01:12:55.867120 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.867127 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:12:55.867133 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:12:55.867140 | orchestrator | 2026-04-13 01:12:55.867146 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-04-13 01:12:55.867153 | orchestrator | Monday 13 April 2026 01:09:33 +0000 (0:00:24.056) 0:06:53.319 ********** 2026-04-13 01:12:55.867159 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:12:55.867165 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:12:55.867172 | orchestrator | changed: [testbed-node-3] 2026-04-13 01:12:55.867179 | orchestrator | 2026-04-13 01:12:55.867185 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-13 01:12:55.867191 | orchestrator | Monday 13 April 2026 01:10:19 +0000 (0:00:46.398) 0:07:39.718 ********** 2026-04-13 01:12:55.867198 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:12:55.867204 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:12:55.867211 | orchestrator | changed: [testbed-node-3] 2026-04-13 01:12:55.867217 | orchestrator | 2026-04-13 01:12:55.867224 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-13 01:12:55.867230 | orchestrator | Monday 13 April 2026 01:11:07 +0000 (0:00:47.630) 0:08:27.348 ********** 2026-04-13 01:12:55.867241 | orchestrator | changed: [testbed-node-3] 2026-04-13 01:12:55.867248 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:12:55.867255 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:12:55.867261 | orchestrator | 2026-04-13 01:12:55.867268 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-13 01:12:55.867278 | orchestrator | Monday 13 April 2026 01:11:07 +0000 (0:00:00.815) 0:08:28.164 ********** 2026-04-13 01:12:55.867285 
| orchestrator | changed: [testbed-node-3] 2026-04-13 01:12:55.867292 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:12:55.867298 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:12:55.867305 | orchestrator | 2026-04-13 01:12:55.867311 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-13 01:12:55.867318 | orchestrator | Monday 13 April 2026 01:11:08 +0000 (0:00:00.797) 0:08:28.961 ********** 2026-04-13 01:12:55.867324 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:12:55.867331 | orchestrator | changed: [testbed-node-3] 2026-04-13 01:12:55.867337 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:12:55.867344 | orchestrator | 2026-04-13 01:12:55.867350 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-13 01:12:55.867357 | orchestrator | Monday 13 April 2026 01:11:38 +0000 (0:00:30.082) 0:08:59.043 ********** 2026-04-13 01:12:55.867363 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.867370 | orchestrator | 2026-04-13 01:12:55.867376 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-13 01:12:55.867383 | orchestrator | Monday 13 April 2026 01:11:39 +0000 (0:00:00.333) 0:08:59.377 ********** 2026-04-13 01:12:55.867389 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.867396 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.867403 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.867409 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.867416 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.867422 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-04-13 01:12:55.867430 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-13 01:12:55.867436 | orchestrator | 2026-04-13 01:12:55.867443 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-13 01:12:55.867449 | orchestrator | Monday 13 April 2026 01:11:59 +0000 (0:00:20.464) 0:09:19.841 ********** 2026-04-13 01:12:55.867456 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.867462 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.867474 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.867483 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.867493 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.867504 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.867514 | orchestrator | 2026-04-13 01:12:55.867524 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-13 01:12:55.867534 | orchestrator | Monday 13 April 2026 01:12:06 +0000 (0:00:07.218) 0:09:27.060 ********** 2026-04-13 01:12:55.867544 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.867554 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.867564 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.867574 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.867584 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.867595 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-04-13 01:12:55.867606 | orchestrator | 2026-04-13 01:12:55.867617 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-13 01:12:55.867627 | orchestrator | Monday 13 April 2026 01:12:08 +0000 (0:00:01.911) 0:09:28.972 ********** 2026-04-13 01:12:55.867637 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-13 01:12:55.867648 | 
orchestrator | 2026-04-13 01:12:55.867660 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-13 01:12:55.867680 | orchestrator | Monday 13 April 2026 01:12:21 +0000 (0:00:12.842) 0:09:41.815 ********** 2026-04-13 01:12:55.867687 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-13 01:12:55.867694 | orchestrator | 2026-04-13 01:12:55.867700 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-13 01:12:55.867707 | orchestrator | Monday 13 April 2026 01:12:22 +0000 (0:00:00.940) 0:09:42.756 ********** 2026-04-13 01:12:55.867713 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.867736 | orchestrator | 2026-04-13 01:12:55.867743 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-13 01:12:55.867749 | orchestrator | Monday 13 April 2026 01:12:23 +0000 (0:00:00.946) 0:09:43.702 ********** 2026-04-13 01:12:55.867756 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-13 01:12:55.867762 | orchestrator | 2026-04-13 01:12:55.867768 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-04-13 01:12:55.867775 | orchestrator | Monday 13 April 2026 01:12:34 +0000 (0:00:11.417) 0:09:55.120 ********** 2026-04-13 01:12:55.867781 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:12:55.867788 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:12:55.867794 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:12:55.867801 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:55.867807 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:12:55.867814 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:12:55.867820 | orchestrator | 2026-04-13 01:12:55.867827 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-13 01:12:55.867833 | orchestrator | 2026-04-13 
01:12:55.867840 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-13 01:12:55.867846 | orchestrator | Monday 13 April 2026 01:12:36 +0000 (0:00:01.833) 0:09:56.954 ********** 2026-04-13 01:12:55.867853 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:55.867859 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:12:55.867866 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:12:55.867872 | orchestrator | 2026-04-13 01:12:55.867879 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-13 01:12:55.867885 | orchestrator | 2026-04-13 01:12:55.867892 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-13 01:12:55.867898 | orchestrator | Monday 13 April 2026 01:12:37 +0000 (0:00:01.181) 0:09:58.135 ********** 2026-04-13 01:12:55.867905 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.867911 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.867918 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.867924 | orchestrator | 2026-04-13 01:12:55.867930 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-13 01:12:55.867937 | orchestrator | 2026-04-13 01:12:55.867948 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-04-13 01:12:55.867955 | orchestrator | Monday 13 April 2026 01:12:38 +0000 (0:00:00.515) 0:09:58.651 ********** 2026-04-13 01:12:55.867961 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-13 01:12:55.867968 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-13 01:12:55.867974 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-13 01:12:55.867981 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-13 01:12:55.867987 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-13 01:12:55.867994 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-13 01:12:55.868000 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-13 01:12:55.868007 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-13 01:12:55.868013 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-13 01:12:55.868020 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-13 01:12:55.868026 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-13 01:12:55.868041 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-13 01:12:55.868047 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:12:55.868054 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-13 01:12:55.868061 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-13 01:12:55.868068 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-13 01:12:55.868074 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-13 01:12:55.868081 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-13 01:12:55.868087 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-13 01:12:55.868093 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:12:55.868105 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-04-13 01:12:55.868111 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-13 01:12:55.868118 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-13 01:12:55.868124 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-04-13 01:12:55.868130 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-13 
01:12:55.868137 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-13 01:12:55.868143 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:12:55.868150 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-04-13 01:12:55.868156 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-13 01:12:55.868162 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-13 01:12:55.868169 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-04-13 01:12:55.868176 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-13 01:12:55.868182 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-13 01:12:55.868188 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.868195 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.868201 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-04-13 01:12:55.868208 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-13 01:12:55.868215 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-13 01:12:55.868221 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-04-13 01:12:55.868227 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-13 01:12:55.868234 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-13 01:12:55.868240 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.868247 | orchestrator | 2026-04-13 01:12:55.868253 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-04-13 01:12:55.868260 | orchestrator | 2026-04-13 01:12:55.868266 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-04-13 01:12:55.868273 | orchestrator | Monday 13 April 2026 01:12:39 +0000 (0:00:01.349) 
0:10:00.001 ********** 2026-04-13 01:12:55.868280 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-04-13 01:12:55.868286 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-13 01:12:55.868293 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.868300 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-04-13 01:12:55.868306 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-13 01:12:55.868313 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.868319 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-04-13 01:12:55.868326 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-13 01:12:55.868332 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:55.868339 | orchestrator | 2026-04-13 01:12:55.868345 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-04-13 01:12:55.868358 | orchestrator | 2026-04-13 01:12:55.868365 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-04-13 01:12:55.868371 | orchestrator | Monday 13 April 2026 01:12:40 +0000 (0:00:00.752) 0:10:00.753 ********** 2026-04-13 01:12:55.868378 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.868384 | orchestrator | 2026-04-13 01:12:55.868391 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-04-13 01:12:55.868397 | orchestrator | 2026-04-13 01:12:55.868404 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-04-13 01:12:55.868410 | orchestrator | Monday 13 April 2026 01:12:41 +0000 (0:00:00.733) 0:10:01.487 ********** 2026-04-13 01:12:55.868417 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:55.868423 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:55.868430 | orchestrator | skipping: [testbed-node-2] 
2026-04-13 01:12:55.868437 | orchestrator | 2026-04-13 01:12:55.868447 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:12:55.868454 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:12:55.868461 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-04-13 01:12:55.868468 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-13 01:12:55.868475 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-13 01:12:55.868481 | orchestrator | testbed-node-3 : ok=46  changed=28  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-13 01:12:55.868488 | orchestrator | testbed-node-4 : ok=40  changed=29  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-13 01:12:55.868495 | orchestrator | testbed-node-5 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-13 01:12:55.868502 | orchestrator | 2026-04-13 01:12:55.868508 | orchestrator | 2026-04-13 01:12:55.868515 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:12:55.868521 | orchestrator | Monday 13 April 2026 01:12:41 +0000 (0:00:00.599) 0:10:02.087 ********** 2026-04-13 01:12:55.868531 | orchestrator | =============================================================================== 2026-04-13 01:12:55.868538 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 47.63s 2026-04-13 01:12:55.868545 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 46.40s 2026-04-13 01:12:55.868552 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.66s 2026-04-13 01:12:55.868558 | orchestrator | nova-cell : 
Restart nova-compute container ----------------------------- 30.08s 2026-04-13 01:12:55.868565 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 24.06s 2026-04-13 01:12:55.868571 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.62s 2026-04-13 01:12:55.868578 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.49s 2026-04-13 01:12:55.868584 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.46s 2026-04-13 01:12:55.868591 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.27s 2026-04-13 01:12:55.868597 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.26s 2026-04-13 01:12:55.868603 | orchestrator | nova : Restart nova-api container -------------------------------------- 13.78s 2026-04-13 01:12:55.868610 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.14s 2026-04-13 01:12:55.868617 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.84s 2026-04-13 01:12:55.868629 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.79s 2026-04-13 01:12:55.868635 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.36s 2026-04-13 01:12:55.868642 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.69s 2026-04-13 01:12:55.868648 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.42s 2026-04-13 01:12:55.868655 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.14s 2026-04-13 01:12:55.868661 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.23s 2026-04-13 01:12:55.868668 | orchestrator | nova-cell : Copying over 
libvirt SASL configuration --------------------- 7.86s 2026-04-13 01:12:55.868675 | orchestrator | 2026-04-13 01:12:55 | INFO  | Task b4ee6de1-6d7d-4883-bd6a-3aceb7a44a24 is in state SUCCESS 2026-04-13 01:12:55.868681 | orchestrator | 2026-04-13 01:12:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:12:58.889235 | orchestrator | 2026-04-13 01:12:58 | INFO  | Task fd1b0ae9-6ae9-4a68-889e-92849f499e01 is in state STARTED 2026-04-13 01:12:58.894105 | orchestrator | 2026-04-13 01:12:58 | INFO  | Task c92bdec8-da60-4eb7-b31a-a3d97eec7309 is in state SUCCESS 2026-04-13 01:12:58.895191 | orchestrator | 2026-04-13 01:12:58.895241 | orchestrator | 2026-04-13 01:12:58.895255 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 01:12:58.895267 | orchestrator | 2026-04-13 01:12:58.895277 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 01:12:58.895288 | orchestrator | Monday 13 April 2026 01:09:43 +0000 (0:00:00.382) 0:00:00.382 ********** 2026-04-13 01:12:58.895298 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:58.895309 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:12:58.895319 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:12:58.895328 | orchestrator | 2026-04-13 01:12:58.895339 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 01:12:58.895349 | orchestrator | Monday 13 April 2026 01:09:43 +0000 (0:00:00.296) 0:00:00.679 ********** 2026-04-13 01:12:58.895360 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-13 01:12:58.895370 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-13 01:12:58.895379 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-13 01:12:58.895388 | orchestrator | 2026-04-13 01:12:58.895397 | orchestrator | PLAY [Apply role grafana] 
****************************************************** 2026-04-13 01:12:58.895406 | orchestrator | 2026-04-13 01:12:58.895414 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-13 01:12:58.895423 | orchestrator | Monday 13 April 2026 01:09:43 +0000 (0:00:00.289) 0:00:00.969 ********** 2026-04-13 01:12:58.895432 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:12:58.895443 | orchestrator | 2026-04-13 01:12:58.895453 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-13 01:12:58.895461 | orchestrator | Monday 13 April 2026 01:09:44 +0000 (0:00:00.740) 0:00:01.710 ********** 2026-04-13 01:12:58.895474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 01:12:58.895506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 01:12:58.895538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 01:12:58.895550 | orchestrator | 2026-04-13 01:12:58.895560 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-13 01:12:58.895569 | orchestrator | Monday 13 April 2026 01:09:45 +0000 (0:00:01.172) 0:00:02.882 ********** 2026-04-13 01:12:58.895591 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-04-13 01:12:58.895602 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-04-13 01:12:58.895611 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-13 01:12:58.895620 | orchestrator | 2026-04-13 01:12:58.895630 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-13 01:12:58.895640 | orchestrator | Monday 13 April 2026 01:09:46 +0000 (0:00:00.945) 0:00:03.827 ********** 2026-04-13 01:12:58.896235 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:12:58.896258 | orchestrator | 2026-04-13 01:12:58.896269 | orchestrator | 
TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-13 01:12:58.896279 | orchestrator | Monday 13 April 2026 01:09:47 +0000 (0:00:00.527) 0:00:04.354 ********** 2026-04-13 01:12:58.896309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 01:12:58.896321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 01:12:58.896332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 01:12:58.896356 | orchestrator | 2026-04-13 01:12:58.896367 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-13 01:12:58.896384 | orchestrator | Monday 13 April 2026 01:09:48 +0000 (0:00:01.597) 0:00:05.952 ********** 2026-04-13 01:12:58.896396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-13 01:12:58.896407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}})  2026-04-13 01:12:58.896418 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:58.896428 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:58.896447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-13 01:12:58.896459 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:58.896469 | orchestrator | 2026-04-13 01:12:58.896479 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-13 01:12:58.896489 | orchestrator | Monday 13 April 2026 01:09:49 +0000 (0:00:00.363) 0:00:06.315 ********** 2026-04-13 01:12:58.896500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-13 01:12:58.896511 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-13 01:12:58.896530 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:58.896541 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:58.896557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-13 01:12:58.896569 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:58.896579 | orchestrator | 2026-04-13 01:12:58.896590 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-13 01:12:58.896600 | orchestrator | Monday 13 April 2026 01:09:49 +0000 (0:00:00.585) 0:00:06.901 ********** 2026-04-13 01:12:58.896609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 01:12:58.896619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 01:12:58.896636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 01:12:58.896658 | orchestrator | 2026-04-13 
01:12:58.896826 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-13 01:12:58.896840 | orchestrator | Monday 13 April 2026 01:09:51 +0000 (0:00:01.568) 0:00:08.469 ********** 2026-04-13 01:12:58.896861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 01:12:58.896871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 01:12:58.896888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 01:12:58.896897 | orchestrator | 2026-04-13 01:12:58.896907 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-13 01:12:58.896916 | orchestrator | Monday 13 April 2026 01:09:52 +0000 (0:00:01.430) 0:00:09.900 ********** 2026-04-13 01:12:58.896952 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:58.896962 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:58.896971 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:58.896979 | orchestrator | 2026-04-13 01:12:58.896988 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-04-13 01:12:58.896997 | orchestrator | Monday 13 April 2026 01:09:53 +0000 (0:00:00.378) 0:00:10.278 ********** 2026-04-13 01:12:58.897005 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-13 01:12:58.897015 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-13 01:12:58.897025 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-13 01:12:58.897034 | orchestrator | 2026-04-13 01:12:58.897043 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-13 01:12:58.897052 | orchestrator | Monday 13 April 2026 01:09:54 +0000 (0:00:01.358) 0:00:11.636 ********** 2026-04-13 01:12:58.897060 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-13 01:12:58.897066 | 
orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-13 01:12:58.897072 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-13 01:12:58.897078 | orchestrator | 2026-04-13 01:12:58.897083 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-04-13 01:12:58.897089 | orchestrator | Monday 13 April 2026 01:09:55 +0000 (0:00:01.431) 0:00:13.068 ********** 2026-04-13 01:12:58.897108 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-13 01:12:58.897114 | orchestrator | 2026-04-13 01:12:58.897119 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-04-13 01:12:58.897125 | orchestrator | Monday 13 April 2026 01:09:57 +0000 (0:00:01.152) 0:00:14.221 ********** 2026-04-13 01:12:58.897131 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-04-13 01:12:58.897136 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-04-13 01:12:58.897142 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:58.897148 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:12:58.897154 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:12:58.897159 | orchestrator | 2026-04-13 01:12:58.897165 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-04-13 01:12:58.897171 | orchestrator | Monday 13 April 2026 01:09:57 +0000 (0:00:00.714) 0:00:14.935 ********** 2026-04-13 01:12:58.897177 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:58.897182 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:58.897188 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:58.897194 | orchestrator | 2026-04-13 01:12:58.897199 | orchestrator | TASK [grafana : Copying over custom dashboards] 
******************************** 2026-04-13 01:12:58.897205 | orchestrator | Monday 13 April 2026 01:09:58 +0000 (0:00:00.321)       0:00:15.256 ********** 2026-04-13 01:12:58.897212 | orchestrator | changed: [testbed-node-0], [testbed-node-1], [testbed-node-2] => (loop over /operations/grafana/dashboards/ceph/; each item a regular file, mode '0644', uid 0, gid 0, owner root:root, dev 121, nlink 1) 2026-04-13 01:12:58 | orchestrator |   ceph/ceph-cluster-advanced.json (121701 bytes), ceph/cephfsdashboard.json (143913 bytes), ceph/rbd-overview.json (26019 bytes), ceph/ceph_pools.json (25279 bytes), ceph/rgw-s3-analytics.json (170293 bytes), ceph/ceph-nvmeof-performance.json (33297 bytes), ceph/osd-device-details.json (26346 bytes), ceph/radosgw-overview.json (46110 bytes), ceph/README.md (84 bytes), ceph/ceph-cluster.json (34113 bytes), ceph/cephfs-overview.json (9025 bytes), ceph/pool-detail.json (19231 bytes), ceph/rbd-details.json (13320 bytes), ceph/ceph_overview.json (80386 bytes), ceph/radosgw-detail.json (20042 bytes), ceph/smb-overview.json (29877 bytes), ceph/osds-overview.json (38375 bytes), ceph/multi-cluster-overview.json (63043 bytes) 2026-04-13 01:12:58 | orchestrator | changed: [testbed-node-0], [testbed-node-1] => ceph/hosts-overview.json (27387 bytes), ceph/pool-overview.json (49016 bytes) 2026-04-13 01:12:58.897805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1314680, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4438922, 'gr_name': 'root',
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1314683, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4442234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1314680, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4438922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1314702, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 
1776039472.4515972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1314696, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4493742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1314702, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4515972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1314662, 'dev': 121, 'nlink': 1, 'atime': 
1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.436752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1314662, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.436752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1314680, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4438922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1314787, 'dev': 121, 
'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4994533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1314787, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4994533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1314702, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4515972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 410814, 'inode': 1314735, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4744465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1314735, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4744465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1314662, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.436752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1314723, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4614496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1314787, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4994533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1314747, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.477248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1314723, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4614496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1314713, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.457408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1314735, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4744465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1314747, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.477248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1314764, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4856026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1314723, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4614496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897975 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1314713, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.457408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1314748, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4835582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1314747, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.477248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.897997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1314764, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4856026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1314766, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4867468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1314713, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.457408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1314748, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4835582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1314783, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.495252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1314766, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 
'mtime': 1776038544.0, 'ctime': 1776039472.4867468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1314764, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4856026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1314762, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.485041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1314783, 
'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.495252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1314748, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4835582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1314743, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4752614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1314762, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.485041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1314766, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4867468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1314731, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4682515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1314743, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4752614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1314783, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.495252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1314741, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4744465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1314731, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4682515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1314762, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.485041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1314725, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4666905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898169 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1314741, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4744465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1314743, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4752614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1314746, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4764996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-04-13 01:12:58.898192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1314725, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4666905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1314731, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4682515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1314774, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4952185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1314746, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4764996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1314741, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4744465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1314769, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4892519, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1314774, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4952185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1314725, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4666905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1314715, 'dev': 121, 'nlink': 1, 'atime': 
1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.458664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1314769, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4892519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1314746, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4764996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1314719, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4614496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1314715, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.458664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1314774, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4952185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1314760, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.485041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1314719, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4614496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1314769, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4892519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1314767, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4878137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1314760, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.485041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1314715, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.458664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898347 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1314767, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4878137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1314719, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4614496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1314760, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.485041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1314767, 'dev': 121, 'nlink': 1, 'atime': 1776038544.0, 'mtime': 1776038544.0, 'ctime': 1776039472.4878137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-13 01:12:58.898373 | orchestrator | 2026-04-13 01:12:58.898381 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-04-13 01:12:58.898389 | orchestrator | Monday 13 April 2026 01:10:43 +0000 (0:00:45.155) 0:01:00.412 ********** 2026-04-13 01:12:58.898397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 01:12:58.898408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 01:12:58.898416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-13 01:12:58.898425 | orchestrator | 2026-04-13 01:12:58.898433 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-13 01:12:58.898440 | orchestrator | Monday 13 April 2026 01:10:44 +0000 (0:00:01.240) 0:01:01.653 ********** 2026-04-13 01:12:58.898448 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:58.898456 | orchestrator | 2026-04-13 01:12:58.898464 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-04-13 01:12:58.898472 | orchestrator | Monday 13 April 2026 01:10:46 +0000 (0:00:02.318) 0:01:03.971 ********** 2026-04-13 01:12:58.898480 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:58.898494 | orchestrator | 2026-04-13 01:12:58.898502 | orchestrator | TASK [grafana : Flush handlers] 
************************************************ 2026-04-13 01:12:58.898509 | orchestrator | Monday 13 April 2026 01:10:49 +0000 (0:00:02.370) 0:01:06.342 ********** 2026-04-13 01:12:58.898514 | orchestrator | 2026-04-13 01:12:58.898519 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-13 01:12:58.898524 | orchestrator | Monday 13 April 2026 01:10:49 +0000 (0:00:00.076) 0:01:06.418 ********** 2026-04-13 01:12:58.898529 | orchestrator | 2026-04-13 01:12:58.898534 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-13 01:12:58.898539 | orchestrator | Monday 13 April 2026 01:10:49 +0000 (0:00:00.065) 0:01:06.483 ********** 2026-04-13 01:12:58.898544 | orchestrator | 2026-04-13 01:12:58.898549 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-04-13 01:12:58.898554 | orchestrator | Monday 13 April 2026 01:10:49 +0000 (0:00:00.080) 0:01:06.564 ********** 2026-04-13 01:12:58.898559 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:58.898568 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:58.898573 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:12:58.898578 | orchestrator | 2026-04-13 01:12:58.898583 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-04-13 01:12:58.898588 | orchestrator | Monday 13 April 2026 01:10:51 +0000 (0:00:01.824) 0:01:08.389 ********** 2026-04-13 01:12:58.898593 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:58.898598 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:58.898603 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-04-13 01:12:58.898609 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2026-04-13 01:12:58.898618 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:58.898626 | orchestrator | 2026-04-13 01:12:58.898633 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-04-13 01:12:58.898641 | orchestrator | Monday 13 April 2026 01:11:17 +0000 (0:00:26.417) 0:01:34.806 ********** 2026-04-13 01:12:58.898649 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:58.898657 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:12:58.898664 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:12:58.898672 | orchestrator | 2026-04-13 01:12:58.898680 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-04-13 01:12:58.898688 | orchestrator | Monday 13 April 2026 01:11:57 +0000 (0:00:39.996) 0:02:14.802 ********** 2026-04-13 01:12:58.898696 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:12:58.898705 | orchestrator | 2026-04-13 01:12:58.898713 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-04-13 01:12:58.898770 | orchestrator | Monday 13 April 2026 01:12:00 +0000 (0:00:02.379) 0:02:17.182 ********** 2026-04-13 01:12:58.898779 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:58.898786 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:12:58.898794 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:12:58.898803 | orchestrator | 2026-04-13 01:12:58.898811 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-04-13 01:12:58.898819 | orchestrator | Monday 13 April 2026 01:12:00 +0000 (0:00:00.690) 0:02:17.872 ********** 2026-04-13 01:12:58.898829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-04-13 01:12:58.898844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-04-13 01:12:58.898856 | orchestrator | 2026-04-13 01:12:58.898861 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-04-13 01:12:58.898866 | orchestrator | Monday 13 April 2026 01:12:03 +0000 (0:00:02.872) 0:02:20.745 ********** 2026-04-13 01:12:58.898871 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:12:58.898876 | orchestrator | 2026-04-13 01:12:58.898881 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:12:58.898886 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 01:12:58.898893 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 01:12:58.898898 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 01:12:58.898903 | orchestrator | 2026-04-13 01:12:58.898908 | orchestrator | 2026-04-13 01:12:58.898913 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:12:58.898918 | orchestrator | Monday 13 April 2026 01:12:04 +0000 (0:00:00.510) 0:02:21.256 ********** 2026-04-13 01:12:58.898923 | orchestrator | =============================================================================== 2026-04-13 01:12:58.898928 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 45.16s 2026-04-13 01:12:58.898933 | orchestrator | grafana : Restart remaining 
grafana containers ------------------------- 40.00s 2026-04-13 01:12:58.898938 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.42s 2026-04-13 01:12:58.898943 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.87s 2026-04-13 01:12:58.898948 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.38s 2026-04-13 01:12:58.898953 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.37s 2026-04-13 01:12:58.898958 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.32s 2026-04-13 01:12:58.898963 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.82s 2026-04-13 01:12:58.898968 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.60s 2026-04-13 01:12:58.898973 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.57s 2026-04-13 01:12:58.898978 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.43s 2026-04-13 01:12:58.898983 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.43s 2026-04-13 01:12:58.898992 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.36s 2026-04-13 01:12:58.898997 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.24s 2026-04-13 01:12:58.899002 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.17s 2026-04-13 01:12:58.899007 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.15s 2026-04-13 01:12:58.899012 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.95s 2026-04-13 01:12:58.899018 | orchestrator | grafana : include_tasks 
------------------------------------------------- 0.74s 2026-04-13 01:12:58.899023 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.71s 2026-04-13 01:12:58.899028 | orchestrator | grafana : Remove old grafana docker volume ------------------------------ 0.69s 2026-04-13 01:12:58.899033 | orchestrator | 2026-04-13 01:12:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:13:01.938951 | orchestrator | 2026-04-13 01:13:01 | INFO  | Task fd1b0ae9-6ae9-4a68-889e-92849f499e01 is in state STARTED 2026-04-13 01:13:01.939049 | orchestrator | 2026-04-13 01:13:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:15:31.195316 | orchestrator | 2026-04-13 01:15:31 | INFO  | Task fd1b0ae9-6ae9-4a68-889e-92849f499e01 is in state SUCCESS 2026-04-13 01:15:31.197048 | orchestrator | 2026-04-13 01:15:31.197752 | orchestrator | 2026-04-13 01:15:31.197781 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 01:15:31.197791 | orchestrator | 2026-04-13 01:15:31.197800 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 01:15:31.197808 | orchestrator | Monday 13 April 2026 01:10:33 +0000 (0:00:00.557) 0:00:00.557 ********** 2026-04-13 01:15:31.197815 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:15:31.197823 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:15:31.197830 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:15:31.197837 | orchestrator | 2026-04-13 01:15:31.197844 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 01:15:31.197851 | orchestrator | Monday 13 April 2026 01:10:33 +0000 (0:00:00.394) 0:00:00.952 ********** 2026-04-13 01:15:31.197859 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-13 01:15:31.197867 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-13 01:15:31.197874 | 
orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-04-13 01:15:31.197881 | orchestrator | 2026-04-13 01:15:31.197888 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-04-13 01:15:31.197895 | orchestrator | 2026-04-13 01:15:31.197903 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-13 01:15:31.197910 | orchestrator | Monday 13 April 2026 01:10:33 +0000 (0:00:00.311) 0:00:01.263 ********** 2026-04-13 01:15:31.197918 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:15:31.197926 | orchestrator | 2026-04-13 01:15:31.197933 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-04-13 01:15:31.197939 | orchestrator | Monday 13 April 2026 01:10:34 +0000 (0:00:00.682) 0:00:01.945 ********** 2026-04-13 01:15:31.197947 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-04-13 01:15:31.197955 | orchestrator | 2026-04-13 01:15:31.197962 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-04-13 01:15:31.197969 | orchestrator | Monday 13 April 2026 01:10:38 +0000 (0:00:03.812) 0:00:05.758 ********** 2026-04-13 01:15:31.197976 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-04-13 01:15:31.197984 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-04-13 01:15:31.197991 | orchestrator | 2026-04-13 01:15:31.197999 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-04-13 01:15:31.198006 | orchestrator | Monday 13 April 2026 01:10:44 +0000 (0:00:06.362) 0:00:12.121 ********** 2026-04-13 01:15:31.198073 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-13 
01:15:31.198084 | orchestrator | 2026-04-13 01:15:31.198091 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-04-13 01:15:31.198098 | orchestrator | Monday 13 April 2026 01:10:48 +0000 (0:00:03.329) 0:00:15.451 ********** 2026-04-13 01:15:31.198104 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-13 01:15:31.198142 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-13 01:15:31.198291 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-13 01:15:31.198298 | orchestrator | 2026-04-13 01:15:31.198304 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-04-13 01:15:31.198310 | orchestrator | Monday 13 April 2026 01:10:56 +0000 (0:00:08.310) 0:00:23.761 ********** 2026-04-13 01:15:31.198320 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-13 01:15:31.198327 | orchestrator | 2026-04-13 01:15:31.198336 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-04-13 01:15:31.198344 | orchestrator | Monday 13 April 2026 01:10:59 +0000 (0:00:03.163) 0:00:26.925 ********** 2026-04-13 01:15:31.198372 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-13 01:15:31.198381 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-13 01:15:31.198387 | orchestrator | 2026-04-13 01:15:31.198395 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-04-13 01:15:31.198486 | orchestrator | Monday 13 April 2026 01:11:07 +0000 (0:00:07.803) 0:00:34.728 ********** 2026-04-13 01:15:31.198495 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-04-13 01:15:31.198502 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-04-13 01:15:31.198509 | orchestrator | changed: 
[testbed-node-0] => (item=load-balancer_member) 2026-04-13 01:15:31.198516 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-04-13 01:15:31.198523 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-04-13 01:15:31.198530 | orchestrator | 2026-04-13 01:15:31.198555 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-13 01:15:31.198565 | orchestrator | Monday 13 April 2026 01:11:23 +0000 (0:00:15.809) 0:00:50.538 ********** 2026-04-13 01:15:31.198574 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:15:31.198583 | orchestrator | 2026-04-13 01:15:31.198607 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-04-13 01:15:31.198617 | orchestrator | Monday 13 April 2026 01:11:23 +0000 (0:00:00.728) 0:00:51.267 ********** 2026-04-13 01:15:31.198626 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.198635 | orchestrator | 2026-04-13 01:15:31.198645 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-04-13 01:15:31.198654 | orchestrator | Monday 13 April 2026 01:11:28 +0000 (0:00:04.994) 0:00:56.261 ********** 2026-04-13 01:15:31.198662 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.198672 | orchestrator | 2026-04-13 01:15:31.198679 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-13 01:15:31.198751 | orchestrator | Monday 13 April 2026 01:11:33 +0000 (0:00:04.695) 0:01:00.957 ********** 2026-04-13 01:15:31.198760 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:15:31.198767 | orchestrator | 2026-04-13 01:15:31.198773 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-04-13 01:15:31.198780 | orchestrator | Monday 13 April 2026 01:11:36 +0000 
(0:00:03.436) 0:01:04.393 ********** 2026-04-13 01:15:31.198787 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-13 01:15:31.198794 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-13 01:15:31.198800 | orchestrator | 2026-04-13 01:15:31.198806 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-04-13 01:15:31.198812 | orchestrator | Monday 13 April 2026 01:11:46 +0000 (0:00:09.852) 0:01:14.246 ********** 2026-04-13 01:15:31.198818 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-04-13 01:15:31.198825 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-04-13 01:15:31.198832 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-04-13 01:15:31.198850 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-04-13 01:15:31.198856 | orchestrator | 2026-04-13 01:15:31.198862 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-04-13 01:15:31.198868 | orchestrator | Monday 13 April 2026 01:12:05 +0000 (0:00:18.262) 0:01:32.509 ********** 2026-04-13 01:15:31.198875 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.198881 | orchestrator | 2026-04-13 01:15:31.198888 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-04-13 01:15:31.198895 | orchestrator | Monday 13 April 2026 01:12:09 +0000 (0:00:04.465) 0:01:36.974 ********** 2026-04-13 01:15:31.198901 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.198908 | orchestrator | 2026-04-13 
01:15:31.198914 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-04-13 01:15:31.198920 | orchestrator | Monday 13 April 2026 01:12:15 +0000 (0:00:05.561) 0:01:42.535 ********** 2026-04-13 01:15:31.198927 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:15:31.198933 | orchestrator | 2026-04-13 01:15:31.198939 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-04-13 01:15:31.198946 | orchestrator | Monday 13 April 2026 01:12:15 +0000 (0:00:00.631) 0:01:43.167 ********** 2026-04-13 01:15:31.198952 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:15:31.198958 | orchestrator | 2026-04-13 01:15:31.198965 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-13 01:15:31.198971 | orchestrator | Monday 13 April 2026 01:12:20 +0000 (0:00:04.746) 0:01:47.913 ********** 2026-04-13 01:15:31.198977 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:15:31.198984 | orchestrator | 2026-04-13 01:15:31.198991 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-04-13 01:15:31.198997 | orchestrator | Monday 13 April 2026 01:12:21 +0000 (0:00:00.824) 0:01:48.737 ********** 2026-04-13 01:15:31.199004 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:15:31.199011 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.199017 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:15:31.199023 | orchestrator | 2026-04-13 01:15:31.199030 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-04-13 01:15:31.199036 | orchestrator | Monday 13 April 2026 01:12:27 +0000 (0:00:06.163) 0:01:54.901 ********** 2026-04-13 01:15:31.199042 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:15:31.199048 | orchestrator | changed: 
[testbed-node-0] 2026-04-13 01:15:31.199054 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:15:31.199060 | orchestrator | 2026-04-13 01:15:31.199090 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-04-13 01:15:31.199097 | orchestrator | Monday 13 April 2026 01:12:32 +0000 (0:00:04.690) 0:01:59.591 ********** 2026-04-13 01:15:31.199109 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.199116 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:15:31.199122 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:15:31.199129 | orchestrator | 2026-04-13 01:15:31.199136 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-04-13 01:15:31.199143 | orchestrator | Monday 13 April 2026 01:12:33 +0000 (0:00:00.824) 0:02:00.416 ********** 2026-04-13 01:15:31.199150 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:15:31.199156 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:15:31.199162 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:15:31.199168 | orchestrator | 2026-04-13 01:15:31.199175 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-04-13 01:15:31.199182 | orchestrator | Monday 13 April 2026 01:12:34 +0000 (0:00:01.729) 0:02:02.145 ********** 2026-04-13 01:15:31.199189 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:15:31.199201 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.199208 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:15:31.199214 | orchestrator | 2026-04-13 01:15:31.199221 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-04-13 01:15:31.199227 | orchestrator | Monday 13 April 2026 01:12:36 +0000 (0:00:01.371) 0:02:03.517 ********** 2026-04-13 01:15:31.199234 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.199244 | orchestrator | changed: [testbed-node-1] 2026-04-13 
01:15:31.199251 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:15:31.199258 | orchestrator | 2026-04-13 01:15:31.199264 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-04-13 01:15:31.199271 | orchestrator | Monday 13 April 2026 01:12:37 +0000 (0:00:01.179) 0:02:04.696 ********** 2026-04-13 01:15:31.199277 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.199283 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:15:31.199289 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:15:31.199296 | orchestrator | 2026-04-13 01:15:31.199339 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-04-13 01:15:31.199350 | orchestrator | Monday 13 April 2026 01:12:39 +0000 (0:00:02.344) 0:02:07.041 ********** 2026-04-13 01:15:31.199361 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.199371 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:15:31.199381 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:15:31.199391 | orchestrator | 2026-04-13 01:15:31.199401 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-04-13 01:15:31.199407 | orchestrator | Monday 13 April 2026 01:12:41 +0000 (0:00:01.685) 0:02:08.727 ********** 2026-04-13 01:15:31.199412 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:15:31.199418 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:15:31.199425 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:15:31.199431 | orchestrator | 2026-04-13 01:15:31.199438 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-04-13 01:15:31.199444 | orchestrator | Monday 13 April 2026 01:12:41 +0000 (0:00:00.622) 0:02:09.349 ********** 2026-04-13 01:15:31.199450 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:15:31.199457 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:15:31.199463 | orchestrator | ok: 
[testbed-node-2] 2026-04-13 01:15:31.199469 | orchestrator | 2026-04-13 01:15:31.199476 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-13 01:15:31.199482 | orchestrator | Monday 13 April 2026 01:12:44 +0000 (0:00:02.783) 0:02:12.133 ********** 2026-04-13 01:15:31.199488 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:15:31.199495 | orchestrator | 2026-04-13 01:15:31.199501 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-04-13 01:15:31.199508 | orchestrator | Monday 13 April 2026 01:12:45 +0000 (0:00:00.722) 0:02:12.855 ********** 2026-04-13 01:15:31.199515 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:15:31.199521 | orchestrator | 2026-04-13 01:15:31.199528 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-13 01:15:31.199535 | orchestrator | Monday 13 April 2026 01:12:48 +0000 (0:00:03.345) 0:02:16.201 ********** 2026-04-13 01:15:31.199541 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:15:31.199548 | orchestrator | 2026-04-13 01:15:31.199555 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-04-13 01:15:31.199562 | orchestrator | Monday 13 April 2026 01:12:51 +0000 (0:00:03.178) 0:02:19.379 ********** 2026-04-13 01:15:31.199568 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-13 01:15:31.199575 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-13 01:15:31.199581 | orchestrator | 2026-04-13 01:15:31.199588 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-04-13 01:15:31.199613 | orchestrator | Monday 13 April 2026 01:12:59 +0000 (0:00:07.684) 0:02:27.064 ********** 2026-04-13 01:15:31.199620 | orchestrator | ok: [testbed-node-0] 2026-04-13 
01:15:31.199634 | orchestrator | 2026-04-13 01:15:31.199641 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-13 01:15:31.199648 | orchestrator | Monday 13 April 2026 01:13:03 +0000 (0:00:03.468) 0:02:30.533 ********** 2026-04-13 01:15:31.199654 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:15:31.199660 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:15:31.199666 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:15:31.199672 | orchestrator | 2026-04-13 01:15:31.199678 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-13 01:15:31.199684 | orchestrator | Monday 13 April 2026 01:13:03 +0000 (0:00:00.341) 0:02:30.874 ********** 2026-04-13 01:15:31.199699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 01:15:31.199736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 01:15:31.199744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 01:15:31.199751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-13 01:15:31.199766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-13 01:15:31.199772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-13 01:15:31.199783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.199791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.199817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.199824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.199831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.199842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.199849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:15:31.199859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:15:31.199866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:15:31.199873 | orchestrator |
2026-04-13 01:15:31.199879 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-04-13 01:15:31.199886 | orchestrator | Monday 13 April 2026 01:13:06 +0000 (0:00:02.929) 0:02:33.804 **********
2026-04-13 01:15:31.199892 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:15:31.199899 | orchestrator |
2026-04-13 01:15:31.199921 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-04-13 01:15:31.199928 | orchestrator | Monday 13 April 2026 01:13:06 +0000 (0:00:00.142) 0:02:33.947 **********
2026-04-13 01:15:31.199935 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:15:31.199942 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:15:31.199948 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:15:31.199955 | orchestrator |
2026-04-13 01:15:31.199962 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-04-13 01:15:31.199968 | orchestrator | Monday 13 April 2026 01:13:06 +0000 (0:00:00.322) 0:02:34.269 **********
2026-04-13 01:15:31.199975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-13 01:15:31.199987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-13 01:15:31.199994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:15:31.200017 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:15:31.200041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-13 01:15:31.200049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-13 01:15:31.200060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:15:31.200086 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:15:31.200093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-13 01:15:31.200117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-13 01:15:31.200124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:15:31.200170 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:15:31.200178 | orchestrator |
2026-04-13 01:15:31.200184 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-13 01:15:31.200191 | orchestrator | Monday 13 April 2026 01:13:07 +0000 (0:00:00.682) 0:02:34.952 **********
2026-04-13 01:15:31.200199 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 01:15:31.200205 | orchestrator |
2026-04-13 01:15:31.200212 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-04-13 01:15:31.200219 | orchestrator | Monday 13 April 2026 01:13:08 +0000 (0:00:00.778) 0:02:35.730 **********
2026-04-13 01:15:31.200230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-13 01:15:31.200256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-13 01:15:31.200268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-13 01:15:31.200274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-13 01:15:31.200281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-13 01:15:31.200288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-13 01:15:31.200298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:15:31.200456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:15:31.200469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:15:31.200480 | orchestrator |
2026-04-13 01:15:31.200487 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-04-13 01:15:31.200493 | orchestrator | Monday 13 April 2026 01:13:13 +0000 (0:00:05.217) 0:02:40.948 **********
2026-04-13 01:15:31.200500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-13 01:15:31.200507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-13 01:15:31.200514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:15:31.200538 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:15:31.200550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-13 01:15:31.200561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-13 01:15:31.200568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-13 01:15:31.200588 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:15:31.200612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-13 01:15:31.200626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-13 01:15:31.200639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-13 01:15:31.200652
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:15:31.200658 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:15:31.200664 | orchestrator | 2026-04-13 01:15:31.200670 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-13 01:15:31.200677 | orchestrator | Monday 13 April 2026 01:13:14 +0000 (0:00:00.695) 0:02:41.643 ********** 2026-04-13 01:15:31.200686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-13 01:15:31.200693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-13 01:15:31.200706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-13 01:15:31.200716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-13 01:15:31.200723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:15:31.200729 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:15:31.200737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-13 01:15:31.200744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-13 
01:15:31.200754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-13 01:15:31.200769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-13 01:15:31.200781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:15:31.200790 | orchestrator | skipping: [testbed-node-1] 2026-04-13 
01:15:31.200796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-13 01:15:31.200803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-13 01:15:31.200810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-13 01:15:31.200820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-13 01:15:31.200831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 01:15:31.200838 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:15:31.200845 | orchestrator | 2026-04-13 01:15:31.200852 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-13 01:15:31.200858 | orchestrator | Monday 13 April 2026 01:13:15 +0000 (0:00:01.216) 0:02:42.859 ********** 2026-04-13 01:15:31.200872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 01:15:31.200878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 01:15:31.200883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 01:15:31.200895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-13 01:15:31.200902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-13 01:15:31.200908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-13 01:15:31.200919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.200926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.200934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.200941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.200958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.200965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.200976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:15:31.200983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:15:31.200989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:15:31.200996 | orchestrator | 2026-04-13 01:15:31.201002 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-13 01:15:31.201008 | orchestrator | Monday 13 April 2026 01:13:20 +0000 (0:00:05.359) 0:02:48.219 ********** 2026-04-13 01:15:31.201015 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-13 01:15:31.201022 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-13 01:15:31.201029 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-13 01:15:31.201036 | orchestrator | 2026-04-13 01:15:31.201042 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-13 01:15:31.201080 | orchestrator | Monday 13 April 2026 01:13:22 +0000 (0:00:01.699) 0:02:49.918 ********** 2026-04-13 01:15:31.201088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}}) 2026-04-13 01:15:31.201098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 01:15:31.201109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 01:15:31.201116 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-13 01:15:31.201123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-13 01:15:31.201141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-13 01:15:31.201166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.201176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.201182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.201192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.201199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.201206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.201217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:15:31.201226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:15:31.201234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:15:31.201240 | orchestrator | 2026-04-13 01:15:31.201246 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-13 01:15:31.201253 | orchestrator | Monday 13 April 2026 01:13:39 +0000 (0:00:17.134) 0:03:07.053 ********** 2026-04-13 01:15:31.201277 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.201284 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:15:31.201291 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:15:31.201297 | orchestrator | 2026-04-13 01:15:31.201304 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-13 01:15:31.201311 
| orchestrator | Monday 13 April 2026 01:13:41 +0000 (0:00:02.151) 0:03:09.204 ********** 2026-04-13 01:15:31.201317 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-13 01:15:31.201324 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-13 01:15:31.201334 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-13 01:15:31.201443 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-13 01:15:31.201449 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-13 01:15:31.201456 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-13 01:15:31.201462 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-13 01:15:31.201468 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-13 01:15:31.201474 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-13 01:15:31.201481 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-13 01:15:31.201487 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-13 01:15:31.201493 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-13 01:15:31.201499 | orchestrator | 2026-04-13 01:15:31.201506 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-13 01:15:31.201518 | orchestrator | Monday 13 April 2026 01:13:47 +0000 (0:00:05.269) 0:03:14.474 ********** 2026-04-13 01:15:31.201524 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-13 01:15:31.201531 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-13 01:15:31.201537 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-13 01:15:31.201544 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-13 
01:15:31.201550 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-13 01:15:31.201557 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-13 01:15:31.201564 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-13 01:15:31.201571 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-13 01:15:31.201578 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-13 01:15:31.201584 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-13 01:15:31.201638 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-13 01:15:31.201647 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-13 01:15:31.201654 | orchestrator | 2026-04-13 01:15:31.201660 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-13 01:15:31.201668 | orchestrator | Monday 13 April 2026 01:13:52 +0000 (0:00:05.613) 0:03:20.087 ********** 2026-04-13 01:15:31.201675 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-13 01:15:31.201681 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-13 01:15:31.201688 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-13 01:15:31.201695 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-13 01:15:31.201702 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-13 01:15:31.201709 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-13 01:15:31.201716 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-13 01:15:31.201742 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-13 01:15:31.201750 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-13 01:15:31.201758 | 
orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-13 01:15:31.201764 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-13 01:15:31.201770 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-13 01:15:31.201776 | orchestrator | 2026-04-13 01:15:31.201782 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-04-13 01:15:31.201788 | orchestrator | Monday 13 April 2026 01:13:58 +0000 (0:00:05.333) 0:03:25.420 ********** 2026-04-13 01:15:31.201796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 01:15:31.201856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 01:15:31.201876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 01:15:31.201883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-13 
01:15:31.201915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-13 01:15:31.201926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-13 01:15:31.201934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.201948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 
'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.201960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.201965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.202162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.202173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-13 01:15:31.202184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:15:31.202194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:15:31.202209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-13 01:15:31.202213 | orchestrator | 2026-04-13 01:15:31.202220 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-13 01:15:31.202227 | orchestrator | Monday 13 April 2026 01:14:02 +0000 (0:00:03.996) 0:03:29.416 ********** 2026-04-13 01:15:31.202233 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:15:31.202239 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:15:31.202246 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:15:31.202252 | orchestrator | 2026-04-13 01:15:31.202260 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-13 01:15:31.202268 | orchestrator | Monday 13 April 2026 01:14:02 +0000 (0:00:00.530) 0:03:29.947 ********** 2026-04-13 01:15:31.202274 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.202351 | orchestrator | 2026-04-13 01:15:31.202358 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-13 01:15:31.202364 | orchestrator | Monday 13 April 2026 01:14:04 +0000 (0:00:02.119) 0:03:32.066 ********** 
2026-04-13 01:15:31.202372 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.202376 | orchestrator | 2026-04-13 01:15:31.202380 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-13 01:15:31.202384 | orchestrator | Monday 13 April 2026 01:14:06 +0000 (0:00:02.139) 0:03:34.206 ********** 2026-04-13 01:15:31.202388 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.202392 | orchestrator | 2026-04-13 01:15:31.202396 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-13 01:15:31.202400 | orchestrator | Monday 13 April 2026 01:14:09 +0000 (0:00:02.264) 0:03:36.470 ********** 2026-04-13 01:15:31.202404 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.202408 | orchestrator | 2026-04-13 01:15:31.202412 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-04-13 01:15:31.202416 | orchestrator | Monday 13 April 2026 01:14:11 +0000 (0:00:02.236) 0:03:38.707 ********** 2026-04-13 01:15:31.202422 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.202429 | orchestrator | 2026-04-13 01:15:31.202436 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-13 01:15:31.202443 | orchestrator | Monday 13 April 2026 01:14:33 +0000 (0:00:22.123) 0:04:00.830 ********** 2026-04-13 01:15:31.202450 | orchestrator | 2026-04-13 01:15:31.202457 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-13 01:15:31.202476 | orchestrator | Monday 13 April 2026 01:14:33 +0000 (0:00:00.067) 0:04:00.898 ********** 2026-04-13 01:15:31.202480 | orchestrator | 2026-04-13 01:15:31.202484 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-13 01:15:31.202489 | orchestrator | Monday 13 April 2026 01:14:33 +0000 (0:00:00.066) 0:04:00.965 
********** 2026-04-13 01:15:31.202492 | orchestrator | 2026-04-13 01:15:31.202496 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-13 01:15:31.202501 | orchestrator | Monday 13 April 2026 01:14:33 +0000 (0:00:00.066) 0:04:01.031 ********** 2026-04-13 01:15:31.202505 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.202509 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:15:31.202513 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:15:31.202521 | orchestrator | 2026-04-13 01:15:31.202526 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-13 01:15:31.202530 | orchestrator | Monday 13 April 2026 01:14:50 +0000 (0:00:16.795) 0:04:17.827 ********** 2026-04-13 01:15:31.202534 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.202538 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:15:31.202542 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:15:31.202546 | orchestrator | 2026-04-13 01:15:31.202794 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-13 01:15:31.202805 | orchestrator | Monday 13 April 2026 01:15:02 +0000 (0:00:12.517) 0:04:30.345 ********** 2026-04-13 01:15:31.202809 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.202813 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:15:31.202817 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:15:31.202822 | orchestrator | 2026-04-13 01:15:31.202826 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-13 01:15:31.202836 | orchestrator | Monday 13 April 2026 01:15:08 +0000 (0:00:06.027) 0:04:36.372 ********** 2026-04-13 01:15:31.202840 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:15:31.202844 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:15:31.202848 | orchestrator | changed: [testbed-node-0] 
2026-04-13 01:15:31.202852 | orchestrator | 2026-04-13 01:15:31.202856 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-13 01:15:31.202860 | orchestrator | Monday 13 April 2026 01:15:17 +0000 (0:00:08.092) 0:04:44.465 ********** 2026-04-13 01:15:31.202864 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:15:31.202868 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:15:31.202872 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:15:31.202876 | orchestrator | 2026-04-13 01:15:31.202880 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:15:31.202885 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 01:15:31.202890 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-13 01:15:31.202894 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-13 01:15:31.202898 | orchestrator | 2026-04-13 01:15:31.202909 | orchestrator | 2026-04-13 01:15:31.202913 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:15:31.202917 | orchestrator | Monday 13 April 2026 01:15:27 +0000 (0:00:10.810) 0:04:55.276 ********** 2026-04-13 01:15:31.202929 | orchestrator | =============================================================================== 2026-04-13 01:15:31.202933 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.12s 2026-04-13 01:15:31.202937 | orchestrator | octavia : Add rules for security groups -------------------------------- 18.26s 2026-04-13 01:15:31.202941 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.13s 2026-04-13 01:15:31.202945 | orchestrator | octavia : Restart octavia-api container 
-------------------------------- 16.80s 2026-04-13 01:15:31.202948 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.81s 2026-04-13 01:15:31.202952 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 12.52s 2026-04-13 01:15:31.202956 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.81s 2026-04-13 01:15:31.202960 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.85s 2026-04-13 01:15:31.202964 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.31s 2026-04-13 01:15:31.202968 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.09s 2026-04-13 01:15:31.202972 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.80s 2026-04-13 01:15:31.202982 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.68s 2026-04-13 01:15:31.202986 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.36s 2026-04-13 01:15:31.202990 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.16s 2026-04-13 01:15:31.202994 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 6.03s 2026-04-13 01:15:31.202998 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.61s 2026-04-13 01:15:31.203002 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.56s 2026-04-13 01:15:31.203006 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.36s 2026-04-13 01:15:31.203010 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.33s 2026-04-13 01:15:31.203014 | orchestrator | octavia : Copying certificate files for octavia-worker 
------------------ 5.27s 2026-04-13 01:15:31.203018 | orchestrator | 2026-04-13 01:15:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:15:34.241809 | orchestrator | 2026-04-13 01:15:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:15:37.285019 | orchestrator | 2026-04-13 01:15:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:15:40.327007 | orchestrator | 2026-04-13 01:15:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:15:43.375099 | orchestrator | 2026-04-13 01:15:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:15:46.430893 | orchestrator | 2026-04-13 01:15:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:15:49.474760 | orchestrator | 2026-04-13 01:15:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:15:52.524276 | orchestrator | 2026-04-13 01:15:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:15:55.564766 | orchestrator | 2026-04-13 01:15:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:15:58.617976 | orchestrator | 2026-04-13 01:15:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:16:01.658320 | orchestrator | 2026-04-13 01:16:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:16:04.704003 | orchestrator | 2026-04-13 01:16:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:16:07.752760 | orchestrator | 2026-04-13 01:16:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:16:10.799305 | orchestrator | 2026-04-13 01:16:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:16:13.856827 | orchestrator | 2026-04-13 01:16:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:16:16.904077 | orchestrator | 2026-04-13 01:16:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 
01:16:19.950073 | orchestrator | 2026-04-13 01:16:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:16:23.000827 | orchestrator | 2026-04-13 01:16:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:16:26.058806 | orchestrator | 2026-04-13 01:16:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:16:29.102829 | orchestrator | 2026-04-13 01:16:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-13 01:16:32.145284 | orchestrator | 2026-04-13 01:16:32.353450 | orchestrator | 2026-04-13 01:16:32.360913 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Apr 13 01:16:32 UTC 2026 2026-04-13 01:16:32.361000 | orchestrator | 2026-04-13 01:16:32.770822 | orchestrator | ok: Runtime: 0:35:26.186833 2026-04-13 01:16:33.020568 | 2026-04-13 01:16:33.020762 | TASK [Bootstrap services] 2026-04-13 01:16:33.812285 | orchestrator | 2026-04-13 01:16:33.812486 | orchestrator | # BOOTSTRAP 2026-04-13 01:16:33.812512 | orchestrator | 2026-04-13 01:16:33.812527 | orchestrator | + set -e 2026-04-13 01:16:33.812603 | orchestrator | + echo 2026-04-13 01:16:33.812628 | orchestrator | + echo '# BOOTSTRAP' 2026-04-13 01:16:33.812651 | orchestrator | + echo 2026-04-13 01:16:33.812697 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-13 01:16:33.824283 | orchestrator | + set -e 2026-04-13 01:16:33.824354 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-13 01:16:39.155740 | orchestrator | 2026-04-13 01:16:39 | INFO  | It takes a moment until task d89c4777-5dd2-4b68-a9d4-b93f42512591 (flavor-manager) has been started and output is visible here. 
2026-04-13 01:16:50.021411 | orchestrator | 2026-04-13 01:16:44 | INFO  | Flavor SCS-1L-1 created 2026-04-13 01:16:50.021588 | orchestrator | 2026-04-13 01:16:45 | INFO  | Flavor SCS-1L-1-5 created 2026-04-13 01:16:50.021608 | orchestrator | 2026-04-13 01:16:45 | INFO  | Flavor SCS-1V-2 created 2026-04-13 01:16:50.021619 | orchestrator | 2026-04-13 01:16:45 | INFO  | Flavor SCS-1V-2-5 created 2026-04-13 01:16:50.021629 | orchestrator | 2026-04-13 01:16:45 | INFO  | Flavor SCS-1V-4 created 2026-04-13 01:16:50.021638 | orchestrator | 2026-04-13 01:16:45 | INFO  | Flavor SCS-1V-4-10 created 2026-04-13 01:16:50.021647 | orchestrator | 2026-04-13 01:16:45 | INFO  | Flavor SCS-1V-8 created 2026-04-13 01:16:50.021657 | orchestrator | 2026-04-13 01:16:46 | INFO  | Flavor SCS-1V-8-20 created 2026-04-13 01:16:50.021678 | orchestrator | 2026-04-13 01:16:46 | INFO  | Flavor SCS-2V-4 created 2026-04-13 01:16:50.021688 | orchestrator | 2026-04-13 01:16:46 | INFO  | Flavor SCS-2V-4-10 created 2026-04-13 01:16:50.021702 | orchestrator | 2026-04-13 01:16:46 | INFO  | Flavor SCS-2V-8 created 2026-04-13 01:16:50.021716 | orchestrator | 2026-04-13 01:16:46 | INFO  | Flavor SCS-2V-8-20 created 2026-04-13 01:16:50.021731 | orchestrator | 2026-04-13 01:16:47 | INFO  | Flavor SCS-2V-16 created 2026-04-13 01:16:50.021747 | orchestrator | 2026-04-13 01:16:47 | INFO  | Flavor SCS-2V-16-50 created 2026-04-13 01:16:50.021762 | orchestrator | 2026-04-13 01:16:47 | INFO  | Flavor SCS-4V-8 created 2026-04-13 01:16:50.021774 | orchestrator | 2026-04-13 01:16:47 | INFO  | Flavor SCS-4V-8-20 created 2026-04-13 01:16:50.021783 | orchestrator | 2026-04-13 01:16:47 | INFO  | Flavor SCS-4V-16 created 2026-04-13 01:16:50.021792 | orchestrator | 2026-04-13 01:16:47 | INFO  | Flavor SCS-4V-16-50 created 2026-04-13 01:16:50.021812 | orchestrator | 2026-04-13 01:16:48 | INFO  | Flavor SCS-4V-32 created 2026-04-13 01:16:50.021821 | orchestrator | 2026-04-13 01:16:48 | INFO  | Flavor SCS-4V-32-100 created 
2026-04-13 01:16:50.021834 | orchestrator | 2026-04-13 01:16:48 | INFO  | Flavor SCS-8V-16 created
2026-04-13 01:16:50.021849 | orchestrator | 2026-04-13 01:16:48 | INFO  | Flavor SCS-8V-16-50 created
2026-04-13 01:16:50.021863 | orchestrator | 2026-04-13 01:16:48 | INFO  | Flavor SCS-8V-32 created
2026-04-13 01:16:50.021878 | orchestrator | 2026-04-13 01:16:48 | INFO  | Flavor SCS-8V-32-100 created
2026-04-13 01:16:50.021892 | orchestrator | 2026-04-13 01:16:48 | INFO  | Flavor SCS-16V-32 created
2026-04-13 01:16:50.021906 | orchestrator | 2026-04-13 01:16:49 | INFO  | Flavor SCS-16V-32-100 created
2026-04-13 01:16:50.021919 | orchestrator | 2026-04-13 01:16:49 | INFO  | Flavor SCS-2V-4-20s created
2026-04-13 01:16:50.021933 | orchestrator | 2026-04-13 01:16:49 | INFO  | Flavor SCS-4V-8-50s created
2026-04-13 01:16:50.021946 | orchestrator | 2026-04-13 01:16:49 | INFO  | Flavor SCS-4V-16-100s created
2026-04-13 01:16:50.021959 | orchestrator | 2026-04-13 01:16:49 | INFO  | Flavor SCS-8V-32-100s created
2026-04-13 01:16:51.670294 | orchestrator | 2026-04-13 01:16:51 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-04-13 01:17:01.878505 | orchestrator | 2026-04-13 01:17:01 | INFO  | Prepare task for execution of bootstrap-basic.
2026-04-13 01:17:01.955944 | orchestrator | 2026-04-13 01:17:01 | INFO  | Task 565913b8-4195-440c-b032-2d638b311397 (bootstrap-basic) was prepared for execution.
2026-04-13 01:17:01.956017 | orchestrator | 2026-04-13 01:17:01 | INFO  | It takes a moment until task 565913b8-4195-440c-b032-2d638b311397 (bootstrap-basic) has been started and output is visible here.
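The flavor names in the flavor-manager output above follow the SCS flavor-naming convention: SCS-<#vCPUs><class>-<RAM GiB>[-<root disk GB>[s]], where (under that convention) V denotes oversubscribable vCPUs, L a lower-performance CPU class, and a trailing "s" an SSD-backed root disk. A minimal, illustrative decoder for such names (this helper is not part of the OSISM tooling):

```python
import re

# Assumed shape of an SCS flavor name: SCS-<#vCPUs><class>-<RAM GiB>[-<disk GB>[s]]
_FLAVOR_RE = re.compile(r"^SCS-(\d+)([VLCT])-(\d+)(?:-(\d+)(s)?)?$")

def parse_scs_flavor(name):
    """Split a name like 'SCS-2V-4-20s' into its resource components."""
    m = _FLAVOR_RE.match(name)
    if m is None:
        raise ValueError(f"not an SCS flavor name: {name}")
    vcpus, cpu_class, ram, disk, ssd = m.groups()
    return {
        "vcpus": int(vcpus),
        "cpu_class": cpu_class,          # V, L, C, or T
        "ram_gib": int(ram),
        "disk_gb": int(disk) if disk else None,  # None: flavor defines no root disk
        "ssd": ssd == "s",
    }

print(parse_scs_flavor("SCS-2V-4-20s"))
```

For instance, SCS-2V-4-20s from the log decodes as 2 vCPUs, 4 GiB RAM, and a 20 GB SSD root disk, while SCS-1V-2 defines no root disk at all.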
2026-04-13 01:17:52.178089 | orchestrator |
2026-04-13 01:17:52.178232 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-04-13 01:17:52.178257 | orchestrator |
2026-04-13 01:17:52.178273 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-13 01:17:52.179059 | orchestrator | Monday 13 April 2026 01:17:05 +0000 (0:00:00.113) 0:00:00.113 **********
2026-04-13 01:17:52.179091 | orchestrator | ok: [localhost]
2026-04-13 01:17:52.179102 | orchestrator |
2026-04-13 01:17:52.179110 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-04-13 01:17:52.179119 | orchestrator | Monday 13 April 2026 01:17:07 +0000 (0:00:02.137) 0:00:02.251 **********
2026-04-13 01:17:52.179130 | orchestrator | ok: [localhost]
2026-04-13 01:17:52.179138 | orchestrator |
2026-04-13 01:17:52.179147 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-04-13 01:17:52.179155 | orchestrator | Monday 13 April 2026 01:17:18 +0000 (0:00:10.571) 0:00:12.822 **********
2026-04-13 01:17:52.179164 | orchestrator | changed: [localhost]
2026-04-13 01:17:52.179173 | orchestrator |
2026-04-13 01:17:52.179181 | orchestrator | TASK [Create public network] ***************************************************
2026-04-13 01:17:52.179189 | orchestrator | Monday 13 April 2026 01:17:26 +0000 (0:00:08.217) 0:00:21.040 **********
2026-04-13 01:17:52.179197 | orchestrator | changed: [localhost]
2026-04-13 01:17:52.179205 | orchestrator |
2026-04-13 01:17:52.179218 | orchestrator | TASK [Set public network to default] *******************************************
2026-04-13 01:17:52.179226 | orchestrator | Monday 13 April 2026 01:17:32 +0000 (0:00:05.685) 0:00:26.726 **********
2026-04-13 01:17:52.179234 | orchestrator | changed: [localhost]
2026-04-13 01:17:52.179242 | orchestrator |
2026-04-13 01:17:52.179250 | orchestrator | TASK [Create public subnet] ****************************************************
2026-04-13 01:17:52.179259 | orchestrator | Monday 13 April 2026 01:17:38 +0000 (0:00:06.844) 0:00:33.571 **********
2026-04-13 01:17:52.179267 | orchestrator | changed: [localhost]
2026-04-13 01:17:52.179275 | orchestrator |
2026-04-13 01:17:52.179283 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-04-13 01:17:52.179291 | orchestrator | Monday 13 April 2026 01:17:43 +0000 (0:00:05.011) 0:00:38.582 **********
2026-04-13 01:17:52.179299 | orchestrator | changed: [localhost]
2026-04-13 01:17:52.179307 | orchestrator |
2026-04-13 01:17:52.179315 | orchestrator | TASK [Create manager role] *****************************************************
2026-04-13 01:17:52.179336 | orchestrator | Monday 13 April 2026 01:17:48 +0000 (0:00:04.170) 0:00:42.753 **********
2026-04-13 01:17:52.179345 | orchestrator | ok: [localhost]
2026-04-13 01:17:52.179353 | orchestrator |
2026-04-13 01:17:52.179361 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 01:17:52.179370 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 01:17:52.179379 | orchestrator |
2026-04-13 01:17:52.179397 | orchestrator |
2026-04-13 01:17:52.179406 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 01:17:52.179414 | orchestrator | Monday 13 April 2026 01:17:51 +0000 (0:00:03.887) 0:00:46.641 **********
2026-04-13 01:17:52.179422 | orchestrator | ===============================================================================
2026-04-13 01:17:52.179430 | orchestrator | Get volume type LUKS --------------------------------------------------- 10.57s
2026-04-13 01:17:52.179461 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.22s
2026-04-13 01:17:52.179469 | orchestrator | Set public network to default ------------------------------------------- 6.84s
2026-04-13 01:17:52.179530 | orchestrator | Create public network --------------------------------------------------- 5.69s
2026-04-13 01:17:52.179545 | orchestrator | Create public subnet ---------------------------------------------------- 5.01s
2026-04-13 01:17:52.179558 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.17s
2026-04-13 01:17:52.179571 | orchestrator | Create manager role ----------------------------------------------------- 3.89s
2026-04-13 01:17:52.179580 | orchestrator | Gathering Facts --------------------------------------------------------- 2.14s
2026-04-13 01:17:54.252143 | orchestrator | 2026-04-13 01:17:54 | INFO  | It takes a moment until task f73663dc-3229-4a90-a102-83a22d5511e8 (image-manager) has been started and output is visible here.
2026-04-13 01:18:38.897010 | orchestrator | 2026-04-13 01:17:57 | INFO  | Processing image 'Cirros 0.6.2'
2026-04-13 01:18:38.897125 | orchestrator | 2026-04-13 01:17:57 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-04-13 01:18:38.897142 | orchestrator | 2026-04-13 01:17:57 | INFO  | Importing image Cirros 0.6.2
2026-04-13 01:18:38.897156 | orchestrator | 2026-04-13 01:17:57 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-13 01:18:38.897168 | orchestrator | 2026-04-13 01:17:59 | INFO  | Waiting for image to leave queued state...
2026-04-13 01:18:38.897180 | orchestrator | 2026-04-13 01:18:01 | INFO  | Waiting for import to complete...
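The bootstrap-basic recap above reports ok=8 changed=5 because the play is idempotent: resources that already exist (the facts lookup, the existing volume type, the manager role) report "ok", while newly created resources report "changed". A minimal sketch of that check-then-create pattern, with stand-in names rather than the actual OSISM playbook code:

```python
def ensure_present(name, existing, create):
    """Idempotent create: 'ok' if the resource already exists, 'changed' if it was created."""
    if name in existing:
        return "ok"
    create(name)
    existing.add(name)
    return "changed"

cloud_state = set()                 # stand-in for the real cloud inventory
api_create = lambda name: None      # stand-in for the actual API call

first = ensure_present("public", cloud_state, api_create)   # creates the network
second = ensure_present("public", cloud_state, api_create)  # already there
print(first, second)
```

Re-running the same play against an already-bootstrapped cloud would therefore report every task as "ok" with changed=0.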
2026-04-13 01:18:38.897191 | orchestrator | 2026-04-13 01:18:12 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-04-13 01:18:38.897203 | orchestrator | 2026-04-13 01:18:12 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-04-13 01:18:38.897214 | orchestrator | 2026-04-13 01:18:12 | INFO  | Setting internal_version = 0.6.2
2026-04-13 01:18:38.897226 | orchestrator | 2026-04-13 01:18:12 | INFO  | Setting image_original_user = cirros
2026-04-13 01:18:38.897237 | orchestrator | 2026-04-13 01:18:12 | INFO  | Adding tag os:cirros
2026-04-13 01:18:38.897249 | orchestrator | 2026-04-13 01:18:12 | INFO  | Setting property architecture: x86_64
2026-04-13 01:18:38.897260 | orchestrator | 2026-04-13 01:18:13 | INFO  | Setting property hw_disk_bus: scsi
2026-04-13 01:18:38.897271 | orchestrator | 2026-04-13 01:18:13 | INFO  | Setting property hw_rng_model: virtio
2026-04-13 01:18:38.897282 | orchestrator | 2026-04-13 01:18:13 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-13 01:18:38.897293 | orchestrator | 2026-04-13 01:18:13 | INFO  | Setting property hw_watchdog_action: reset
2026-04-13 01:18:38.897304 | orchestrator | 2026-04-13 01:18:14 | INFO  | Setting property hypervisor_type: qemu
2026-04-13 01:18:38.897325 | orchestrator | 2026-04-13 01:18:14 | INFO  | Setting property os_distro: cirros
2026-04-13 01:18:38.897337 | orchestrator | 2026-04-13 01:18:14 | INFO  | Setting property os_purpose: minimal
2026-04-13 01:18:38.897348 | orchestrator | 2026-04-13 01:18:15 | INFO  | Setting property replace_frequency: never
2026-04-13 01:18:38.897359 | orchestrator | 2026-04-13 01:18:15 | INFO  | Setting property uuid_validity: none
2026-04-13 01:18:38.897370 | orchestrator | 2026-04-13 01:18:15 | INFO  | Setting property provided_until: none
2026-04-13 01:18:38.897381 | orchestrator | 2026-04-13 01:18:15 | INFO  | Setting property image_description: Cirros
2026-04-13 01:18:38.897393 | orchestrator | 2026-04-13 01:18:16 | INFO  | Setting property image_name: Cirros
2026-04-13 01:18:38.897428 | orchestrator | 2026-04-13 01:18:16 | INFO  | Setting property internal_version: 0.6.2
2026-04-13 01:18:38.897439 | orchestrator | 2026-04-13 01:18:16 | INFO  | Setting property image_original_user: cirros
2026-04-13 01:18:38.897490 | orchestrator | 2026-04-13 01:18:16 | INFO  | Setting property os_version: 0.6.2
2026-04-13 01:18:38.897511 | orchestrator | 2026-04-13 01:18:17 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-13 01:18:38.897534 | orchestrator | 2026-04-13 01:18:17 | INFO  | Setting property image_build_date: 2023-05-30
2026-04-13 01:18:38.897554 | orchestrator | 2026-04-13 01:18:17 | INFO  | Checking status of 'Cirros 0.6.2'
2026-04-13 01:18:38.897569 | orchestrator | 2026-04-13 01:18:17 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-04-13 01:18:38.897587 | orchestrator | 2026-04-13 01:18:17 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-04-13 01:18:38.897600 | orchestrator | 2026-04-13 01:18:18 | INFO  | Processing image 'Cirros 0.6.3'
2026-04-13 01:18:38.897613 | orchestrator | 2026-04-13 01:18:18 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-04-13 01:18:38.897626 | orchestrator | 2026-04-13 01:18:18 | INFO  | Importing image Cirros 0.6.3
2026-04-13 01:18:38.897638 | orchestrator | 2026-04-13 01:18:18 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-13 01:18:38.897651 | orchestrator | 2026-04-13 01:18:19 | INFO  | Waiting for image to leave queued state...
2026-04-13 01:18:38.897664 | orchestrator | 2026-04-13 01:18:21 | INFO  | Waiting for import to complete...
2026-04-13 01:18:38.897695 | orchestrator | 2026-04-13 01:18:32 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-04-13 01:18:38.897708 | orchestrator | 2026-04-13 01:18:32 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-04-13 01:18:38.897720 | orchestrator | 2026-04-13 01:18:32 | INFO  | Setting internal_version = 0.6.3
2026-04-13 01:18:38.897733 | orchestrator | 2026-04-13 01:18:32 | INFO  | Setting image_original_user = cirros
2026-04-13 01:18:38.897745 | orchestrator | 2026-04-13 01:18:32 | INFO  | Adding tag os:cirros
2026-04-13 01:18:38.897757 | orchestrator | 2026-04-13 01:18:32 | INFO  | Setting property architecture: x86_64
2026-04-13 01:18:38.897769 | orchestrator | 2026-04-13 01:18:33 | INFO  | Setting property hw_disk_bus: scsi
2026-04-13 01:18:38.897782 | orchestrator | 2026-04-13 01:18:33 | INFO  | Setting property hw_rng_model: virtio
2026-04-13 01:18:38.897794 | orchestrator | 2026-04-13 01:18:33 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-13 01:18:38.897807 | orchestrator | 2026-04-13 01:18:33 | INFO  | Setting property hw_watchdog_action: reset
2026-04-13 01:18:38.897819 | orchestrator | 2026-04-13 01:18:34 | INFO  | Setting property hypervisor_type: qemu
2026-04-13 01:18:38.897831 | orchestrator | 2026-04-13 01:18:34 | INFO  | Setting property os_distro: cirros
2026-04-13 01:18:38.897843 | orchestrator | 2026-04-13 01:18:34 | INFO  | Setting property os_purpose: minimal
2026-04-13 01:18:38.897856 | orchestrator | 2026-04-13 01:18:34 | INFO  | Setting property replace_frequency: never
2026-04-13 01:18:38.897869 | orchestrator | 2026-04-13 01:18:35 | INFO  | Setting property uuid_validity: none
2026-04-13 01:18:38.897881 | orchestrator | 2026-04-13 01:18:35 | INFO  | Setting property provided_until: none
2026-04-13 01:18:38.897892 | orchestrator | 2026-04-13 01:18:35 | INFO  | Setting property image_description: Cirros
2026-04-13 01:18:38.897912 | orchestrator | 2026-04-13 01:18:35 | INFO  | Setting property image_name: Cirros
2026-04-13 01:18:38.897923 | orchestrator | 2026-04-13 01:18:36 | INFO  | Setting property internal_version: 0.6.3
2026-04-13 01:18:38.897934 | orchestrator | 2026-04-13 01:18:36 | INFO  | Setting property image_original_user: cirros
2026-04-13 01:18:38.897945 | orchestrator | 2026-04-13 01:18:36 | INFO  | Setting property os_version: 0.6.3
2026-04-13 01:18:38.897956 | orchestrator | 2026-04-13 01:18:37 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-13 01:18:38.897968 | orchestrator | 2026-04-13 01:18:37 | INFO  | Setting property image_build_date: 2024-09-26
2026-04-13 01:18:38.897979 | orchestrator | 2026-04-13 01:18:37 | INFO  | Checking status of 'Cirros 0.6.3'
2026-04-13 01:18:38.897989 | orchestrator | 2026-04-13 01:18:37 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-04-13 01:18:38.898001 | orchestrator | 2026-04-13 01:18:37 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-04-13 01:18:39.172310 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh
2026-04-13 01:18:41.188251 | orchestrator | 2026-04-13 01:18:41 | INFO  | date: 2026-04-12
2026-04-13 01:18:41.188374 | orchestrator | 2026-04-13 01:18:41 | INFO  | image: octavia-amphora-haproxy-2024.2.20260412.qcow2
2026-04-13 01:18:41.188741 | orchestrator | 2026-04-13 01:18:41 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260412.qcow2
2026-04-13 01:18:41.188842 | orchestrator | 2026-04-13 01:18:41 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260412.qcow2.CHECKSUM
2026-04-13 01:18:41.378553 | orchestrator | 2026-04-13 01:18:41 | INFO  | checksum: 7f0e44efa7050ce6d00e66aac356fa200966b78f2d88431e0e31b98d52f6c867
2026-04-13 01:18:41.465280 | orchestrator | 2026-04-13 01:18:41 | INFO  | It takes a moment until task 97061512-e494-4e6b-989b-507dade4415b (image-manager) has been started and output is visible here.
2026-04-13 01:20:15.008030 | orchestrator | 2026-04-13 01:18:43 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-12'
2026-04-13 01:20:15.008113 | orchestrator | 2026-04-13 01:18:43 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260412.qcow2: 200
2026-04-13 01:20:15.008122 | orchestrator | 2026-04-13 01:18:43 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-12
2026-04-13 01:20:15.008128 | orchestrator | 2026-04-13 01:18:43 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260412.qcow2
2026-04-13 01:20:15.008134 | orchestrator | 2026-04-13 01:18:45 | INFO  | Waiting for image to leave queued state...
2026-04-13 01:20:15.008140 | orchestrator | 2026-04-13 01:18:47 | INFO  | Waiting for import to complete...
2026-04-13 01:20:15.008145 | orchestrator | 2026-04-13 01:18:57 | INFO  | Waiting for import to complete...
2026-04-13 01:20:15.008149 | orchestrator | 2026-04-13 01:19:07 | INFO  | Waiting for import to complete...
2026-04-13 01:20:15.008155 | orchestrator | 2026-04-13 01:19:18 | INFO  | Waiting for import to complete...
2026-04-13 01:20:15.008161 | orchestrator | 2026-04-13 01:19:28 | INFO  | Waiting for import to complete...
2026-04-13 01:20:15.008166 | orchestrator | 2026-04-13 01:19:38 | INFO  | Waiting for import to complete...
2026-04-13 01:20:15.008171 | orchestrator | 2026-04-13 01:19:48 | INFO  | Waiting for import to complete...
2026-04-13 01:20:15.008192 | orchestrator | 2026-04-13 01:19:58 | INFO  | Waiting for import to complete...
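The repeated "Waiting for import to complete..." lines above come from polling the Glance image status at a fixed interval until the import finishes. A minimal sketch of that pattern (illustrative names and defaults, not the actual image-manager code; the sleep function is injectable so the loop can be tested without real delays):

```python
import time

def wait_for_active(get_status, interval=10, timeout=600, sleep=time.sleep):
    """Poll get_status() every `interval` seconds until it reports 'active'.

    Returns the seconds waited; raises TimeoutError if the deadline passes.
    """
    waited = 0
    while True:
        status = get_status()
        if status == "active":
            return waited
        if waited >= timeout:
            raise TimeoutError(f"image still {status!r} after {timeout}s")
        sleep(interval)  # injected so tests can skip real sleeping
        waited += interval

# Simulated import that becomes active after three polls:
states = iter(["queued", "importing", "importing", "active"])
elapsed = wait_for_active(lambda: next(states), sleep=lambda _: None)
print(elapsed)  # → 30
```

With the 10-second interval assumed here, the eight "Waiting" lines in the log correspond to roughly 80 seconds of import time for the amphora image.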
2026-04-13 01:20:15.008197 | orchestrator | 2026-04-13 01:20:08 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-12' successfully completed, reloading images
2026-04-13 01:20:15.008203 | orchestrator | 2026-04-13 01:20:09 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-12'
2026-04-13 01:20:15.008208 | orchestrator | 2026-04-13 01:20:09 | INFO  | Setting internal_version = 2026-04-12
2026-04-13 01:20:15.008213 | orchestrator | 2026-04-13 01:20:09 | INFO  | Setting image_original_user = ubuntu
2026-04-13 01:20:15.008218 | orchestrator | 2026-04-13 01:20:09 | INFO  | Adding tag amphora
2026-04-13 01:20:15.008223 | orchestrator | 2026-04-13 01:20:09 | INFO  | Adding tag os:ubuntu
2026-04-13 01:20:15.008227 | orchestrator | 2026-04-13 01:20:09 | INFO  | Setting property architecture: x86_64
2026-04-13 01:20:15.008232 | orchestrator | 2026-04-13 01:20:09 | INFO  | Setting property hw_disk_bus: scsi
2026-04-13 01:20:15.008236 | orchestrator | 2026-04-13 01:20:10 | INFO  | Setting property hw_rng_model: virtio
2026-04-13 01:20:15.008241 | orchestrator | 2026-04-13 01:20:10 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-13 01:20:15.008246 | orchestrator | 2026-04-13 01:20:10 | INFO  | Setting property hw_watchdog_action: reset
2026-04-13 01:20:15.008251 | orchestrator | 2026-04-13 01:20:11 | INFO  | Setting property hypervisor_type: qemu
2026-04-13 01:20:15.008255 | orchestrator | 2026-04-13 01:20:11 | INFO  | Setting property os_distro: ubuntu
2026-04-13 01:20:15.008260 | orchestrator | 2026-04-13 01:20:11 | INFO  | Setting property replace_frequency: quarterly
2026-04-13 01:20:15.008265 | orchestrator | 2026-04-13 01:20:11 | INFO  | Setting property uuid_validity: last-1
2026-04-13 01:20:15.008270 | orchestrator | 2026-04-13 01:20:12 | INFO  | Setting property provided_until: none
2026-04-13 01:20:15.008286 | orchestrator | 2026-04-13 01:20:12 | INFO  | Setting property os_purpose: network
2026-04-13 01:20:15.008290 | orchestrator | 2026-04-13 01:20:12 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-04-13 01:20:15.008295 | orchestrator | 2026-04-13 01:20:12 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-04-13 01:20:15.008300 | orchestrator | 2026-04-13 01:20:13 | INFO  | Setting property internal_version: 2026-04-12
2026-04-13 01:20:15.008305 | orchestrator | 2026-04-13 01:20:13 | INFO  | Setting property image_original_user: ubuntu
2026-04-13 01:20:15.008309 | orchestrator | 2026-04-13 01:20:13 | INFO  | Setting property os_version: 2026-04-12
2026-04-13 01:20:15.008314 | orchestrator | 2026-04-13 01:20:13 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260412.qcow2
2026-04-13 01:20:15.008319 | orchestrator | 2026-04-13 01:20:14 | INFO  | Setting property image_build_date: 2026-04-12
2026-04-13 01:20:15.008336 | orchestrator | 2026-04-13 01:20:14 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-12'
2026-04-13 01:20:15.008341 | orchestrator | 2026-04-13 01:20:14 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-12'
2026-04-13 01:20:15.008345 | orchestrator | 2026-04-13 01:20:14 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-04-13 01:20:15.008350 | orchestrator | 2026-04-13 01:20:14 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-04-13 01:20:15.008355 | orchestrator | 2026-04-13 01:20:14 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-04-13 01:20:15.008364 | orchestrator | 2026-04-13 01:20:14 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-04-13 01:20:15.714466 | orchestrator | ok: Runtime: 0:03:41.887209
2026-04-13 01:20:15.742062 |
2026-04-13 01:20:15.742271 | TASK [Run checks]
2026-04-13 01:20:16.474065 | orchestrator | + set -e
2026-04-13 01:20:16.474271 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-13 01:20:16.474294 | orchestrator | ++ export INTERACTIVE=false
2026-04-13 01:20:16.474313 | orchestrator | ++ INTERACTIVE=false
2026-04-13 01:20:16.474325 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-13 01:20:16.474336 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-13 01:20:16.474349 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-13 01:20:16.475402 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-13 01:20:16.483870 | orchestrator |
2026-04-13 01:20:16.483970 | orchestrator | # CHECK
2026-04-13 01:20:16.483985 | orchestrator |
2026-04-13 01:20:16.483998 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-13 01:20:16.484015 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-13 01:20:16.484026 | orchestrator | + echo
2026-04-13 01:20:16.484038 | orchestrator | + echo '# CHECK'
2026-04-13 01:20:16.484048 | orchestrator | + echo
2026-04-13 01:20:16.484064 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-13 01:20:16.485607 | orchestrator | ++ semver latest 5.0.0
2026-04-13 01:20:16.546616 | orchestrator |
2026-04-13 01:20:16.546713 | orchestrator | ## Containers @ testbed-manager
2026-04-13 01:20:16.546727 | orchestrator |
2026-04-13 01:20:16.546751 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-13 01:20:16.546762 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-13 01:20:16.546772 | orchestrator | + echo
2026-04-13 01:20:16.546783 | orchestrator | + echo '## Containers @ testbed-manager'
2026-04-13 01:20:16.546794 | orchestrator | + echo
2026-04-13 01:20:16.546804 | orchestrator | + osism container testbed-manager ps
2026-04-13 01:20:17.735073 | orchestrator | 2026-04-13 01:20:17 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-04-13 01:20:18.161527 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-13 01:20:18.161705 | orchestrator | 1ae5b2e5c6bd registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_blackbox_exporter
2026-04-13 01:20:18.161746 | orchestrator | 5004fd3f620b registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_alertmanager
2026-04-13 01:20:18.161758 | orchestrator | 60261f2aed84 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2026-04-13 01:20:18.161775 | orchestrator | 854541b9b83f registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter
2026-04-13 01:20:18.161791 | orchestrator | 45ee666181b9 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_server
2026-04-13 01:20:18.161802 | orchestrator | c64536fdcd28 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 19 minutes ago Up 18 minutes cephclient
2026-04-13 01:20:18.161812 | orchestrator | aba52cb0f004 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2026-04-13 01:20:18.161823 | orchestrator | 96a1230125d3 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2026-04-13 01:20:18.161859 | orchestrator | 3ee3e9a76c8a registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2026-04-13 01:20:18.161869 | orchestrator | c829cf47c183 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 32 minutes (healthy) 80/tcp phpmyadmin
2026-04-13 01:20:18.161879 | orchestrator | 12d03bac51de registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 33 minutes ago Up 32 minutes openstackclient
2026-04-13 01:20:18.161889 | orchestrator | b5321de2f45d registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 33 minutes ago Up 33 minutes (healthy) 8080/tcp homer
2026-04-13 01:20:18.161899 | orchestrator | 1cb428abdf7f registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 56 minutes ago Up 56 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2026-04-13 01:20:18.161909 | orchestrator | 3e611ae41d91 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 39 minutes (healthy) manager-inventory_reconciler-1
2026-04-13 01:20:18.161919 | orchestrator | 96fe7ee229f2 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) ceph-ansible
2026-04-13 01:20:18.161956 | orchestrator | 0ae78dcca4b2 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-kubernetes
2026-04-13 01:20:18.161974 | orchestrator | 97996935c832 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) kolla-ansible
2026-04-13 01:20:18.161990 | orchestrator | 4e2523ec41ed registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-ansible
2026-04-13 01:20:18.162006 | orchestrator | b10a7b7e57f7 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 40 minutes (healthy) 8000/tcp manager-ara-server-1
2026-04-13 01:20:18.162075 | orchestrator | e822d936150b registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" About an hour ago Up 40 minutes (healthy) osismclient
2026-04-13 01:20:18.162094 | orchestrator | 5e479faa82f2 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-04-13 01:20:18.162113 | orchestrator | 29211eb04828 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-flower-1
2026-04-13 01:20:18.162130 | orchestrator | 2178b3e31550 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-listener-1
2026-04-13 01:20:18.162161 | orchestrator | 8ea9b2322396 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" About an hour ago Up 40 minutes (healthy) 6379/tcp manager-redis-1
2026-04-13 01:20:18.162179 | orchestrator | 3a22307fca8a registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" About an hour ago Up 40 minutes (healthy) 3306/tcp manager-mariadb-1
2026-04-13 01:20:18.162189 | orchestrator | d6616ba28569 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" About an hour ago Up 40 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2026-04-13 01:20:18.162199 | orchestrator | fe909740906d registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-beat-1
2026-04-13 01:20:18.162209 | orchestrator | 96579ee73222 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-openstack-1
2026-04-13 01:20:18.162219 | orchestrator | 69eec2bbf7de registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-04-13 01:20:18.321282 | orchestrator |
2026-04-13 01:20:18.321371 | orchestrator | ## Images @ testbed-manager
2026-04-13 01:20:18.321382 | orchestrator |
2026-04-13 01:20:18.321389 | orchestrator | + echo
2026-04-13 01:20:18.321396 | orchestrator | + echo '## Images @ testbed-manager'
2026-04-13 01:20:18.321422 | orchestrator | + echo
2026-04-13 01:20:18.321441 | orchestrator | + osism container testbed-manager images
2026-04-13 01:20:19.830464 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-13 01:20:19.830570 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 7e37843ae64a About an hour ago 636MB
2026-04-13 01:20:19.830585 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 13c74a2ba7b4 About an hour ago 585MB
2026-04-13 01:20:19.830597 | orchestrator | registry.osism.tech/osism/osism-ansible latest 642f5b5d42b1 About an hour ago 638MB
2026-04-13 01:20:19.830608 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest c30c9c7674aa About an hour ago 1.24GB
2026-04-13 01:20:19.830618 | orchestrator | registry.osism.tech/osism/osism latest 0dbe91ffcef1 About an hour ago 408MB
2026-04-13 01:20:19.830629 | orchestrator | registry.osism.tech/osism/osism-frontend latest cb123771106c About an hour ago 212MB
2026-04-13 01:20:19.830640 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 2a8dd0068f1a About an hour ago 357MB
2026-04-13 01:20:19.830651 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 ae75e9ef1b08 21 hours ago 246MB
2026-04-13 01:20:19.830662 | orchestrator | registry.osism.tech/osism/cephclient reef bf5aa2ba6b2b 21 hours ago 453MB
2026-04-13 01:20:19.830673 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 72b35c3a08d6 47 hours ago 587MB
2026-04-13 01:20:19.830684 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 e80460bcbd2b 47 hours ago 675MB
2026-04-13 01:20:19.830695 | orchestrator | registry.osism.tech/kolla/cron 2024.2 ba2465d5505f 47 hours ago 273MB
2026-04-13 01:20:19.830705 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 19c466868f78 47 hours ago 316MB
2026-04-13 01:20:19.830737 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 ee52a516281f 47 hours ago 365MB
2026-04-13 01:20:19.830748 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 a0e0fa1b465d 47 hours ago 411MB
2026-04-13 01:20:19.830758 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 ae27083b15a5 47 hours ago 313MB
2026-04-13 01:20:19.830769 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 6c981c780f3f 47 hours ago 847MB
2026-04-13 01:20:19.830780 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-04-13 01:20:19.830790 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB
2026-04-13 01:20:19.830801 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 5 months ago 334MB
2026-04-13 01:20:19.830812 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 6 months ago 742MB
2026-04-13 01:20:19.830823 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-04-13 01:20:19.830833 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-04-13 01:20:19.830844 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB
2026-04-13 01:20:19.982958 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-13 01:20:19.984296 | orchestrator | ++ semver latest 5.0.0
2026-04-13 01:20:20.043475 | orchestrator |
2026-04-13 01:20:20.043580 | orchestrator | ## Containers @ testbed-node-0
2026-04-13 01:20:20.043597 | orchestrator |
2026-04-13 01:20:20.043611 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-13 01:20:20.043624 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-13 01:20:20.043652 | orchestrator | + echo
2026-04-13 01:20:20.043675 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-04-13 01:20:20.043688 | orchestrator | + echo
2026-04-13 01:20:20.043701 | orchestrator | + osism container testbed-node-0 ps
2026-04-13 01:20:21.658565 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-13 01:20:21.658680 | orchestrator | 30ac75f6e900 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_worker
2026-04-13 01:20:21.658698 | orchestrator | 30576b5f3a06 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_housekeeping
2026-04-13 01:20:21.658711 | orchestrator | d31c9e958de3 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager
2026-04-13 01:20:21.658740 | orchestrator | 8fee3673177c registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2026-04-13 01:20:21.658751 | orchestrator | 5b08971da266 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2026-04-13 01:20:21.658763 | orchestrator | 657dd268079e registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor
2026-04-13 01:20:21.658774 | orchestrator | f103f5b341c1 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2026-04-13 01:20:21.658785 | orchestrator | 5077f223eccc registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api
2026-04-13 01:20:21.658818 | orchestrator | 0dd9a63fa68c registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api
2026-04-13 01:20:21.658829 | orchestrator | f7e1d7d48b07 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_novncproxy
2026-04-13 01:20:21.658840 | orchestrator | 7ac0dfae9c84 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) nova_conductor
2026-04-13 01:20:21.658851 | orchestrator | 7dbf1a5e6c21 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker
2026-04-13 01:20:21.658862 | orchestrator | 3a0169f4349d registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns
2026-04-13 01:20:21.658873 | orchestrator | 80fad9c263f0 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server
2026-04-13 01:20:21.658884 | orchestrator | da0b767dfbb5 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer
2026-04-13 01:20:21.658894 | orchestrator | ab6ffaa0ad4e registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central
2026-04-13 01:20:21.658905 | orchestrator | 6e334b6835c6 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_api
2026-04-13 01:20:21.658916 | orchestrator | a80d2c2c42ac registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9
2026-04-13 01:20:21.658926 | orchestrator | 5065b28d8846 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker
2026-04-13 01:20:21.658937 | orchestrator | 8c902d0bda2c registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api
2026-04-13 01:20:21.658948 | orchestrator | f649f781ef06 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener
2026-04-13 01:20:21.658978 | orchestrator | 226379b80284 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api
2026-04-13 01:20:21.658996 | orchestrator | b68c8b92dcc7 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init
--single-…" 13 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-04-13 01:20:21.659007 | orchestrator | 2731930f17af registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_backup 2026-04-13 01:20:21.659018 | orchestrator | 099b884f5b92 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_volume 2026-04-13 01:20:21.659035 | orchestrator | f327adb8dc0a registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2026-04-13 01:20:21.659046 | orchestrator | 36e70478dd24 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api 2026-04-13 01:20:21.659057 | orchestrator | cea28dcea574 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2026-04-13 01:20:21.659077 | orchestrator | 37252dbcd115 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2026-04-13 01:20:21.659089 | orchestrator | e8246d36d1a7 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2026-04-13 01:20:21.659100 | orchestrator | 2d6854f77a6b registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2026-04-13 01:20:21.659110 | orchestrator | ce154fbcb125 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2026-04-13 01:20:21.659121 | orchestrator | 9c812aa82b1e registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2026-04-13 01:20:21.659132 | orchestrator | 1d956b8f1aad 
registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-0 2026-04-13 01:20:21.659143 | orchestrator | 9806ebb6a939 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-04-13 01:20:21.659154 | orchestrator | 1d8db7a16dcf registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2026-04-13 01:20:21.659165 | orchestrator | 8971c49c6167 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-04-13 01:20:21.659176 | orchestrator | 02826e64c3b6 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2026-04-13 01:20:21.659186 | orchestrator | a5a9079e8778 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2026-04-13 01:20:21.659197 | orchestrator | b9a3013e1133 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2026-04-13 01:20:21.659208 | orchestrator | f0ee9053b7d8 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2026-04-13 01:20:21.659219 | orchestrator | 9b73c995982e registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-04-13 01:20:21.659230 | orchestrator | 2067082093e8 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0 2026-04-13 01:20:21.659241 | orchestrator | defb9c59fb6e registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2026-04-13 01:20:21.659260 | orchestrator | 4c0f99c639cb registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 
minutes ago Up 24 minutes (healthy) haproxy 2026-04-13 01:20:21.659271 | orchestrator | 16f844ceac01 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2026-04-13 01:20:21.659287 | orchestrator | b374c5769601 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2026-04-13 01:20:21.659305 | orchestrator | 46af5224c89e registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2026-04-13 01:20:21.659316 | orchestrator | ffc1ff2912d1 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2026-04-13 01:20:21.659326 | orchestrator | b0177cc77cde registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2026-04-13 01:20:21.659337 | orchestrator | a417d98a3fbe registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2026-04-13 01:20:21.659348 | orchestrator | 610b18322d18 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2026-04-13 01:20:21.659359 | orchestrator | e26ac8422e1c registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-04-13 01:20:21.659369 | orchestrator | bd7abd8b59fa registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-04-13 01:20:21.659380 | orchestrator | a7868ae829b2 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-04-13 01:20:21.659391 | orchestrator | 38d160657d5c registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-04-13 01:20:21.659402 | 
orchestrator | e0112e571b5e registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2026-04-13 01:20:21.659457 | orchestrator | acb7b83f1c9b registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-04-13 01:20:21.659468 | orchestrator | 673b24c459c7 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2026-04-13 01:20:21.807770 | orchestrator | 2026-04-13 01:20:21.807868 | orchestrator | ## Images @ testbed-node-0 2026-04-13 01:20:21.807883 | orchestrator | 2026-04-13 01:20:21.807894 | orchestrator | + echo 2026-04-13 01:20:21.807904 | orchestrator | + echo '## Images @ testbed-node-0' 2026-04-13 01:20:21.807915 | orchestrator | + echo 2026-04-13 01:20:21.807925 | orchestrator | + osism container testbed-node-0 images 2026-04-13 01:20:23.388674 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-13 01:20:23.388783 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 851c6241fc0f 21 hours ago 1.35GB 2026-04-13 01:20:23.388820 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 72b35c3a08d6 47 hours ago 587MB 2026-04-13 01:20:23.388833 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 e80460bcbd2b 47 hours ago 675MB 2026-04-13 01:20:23.388844 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3502772e2bf5 47 hours ago 1.04GB 2026-04-13 01:20:23.388855 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 8cd075b4e71f 47 hours ago 330MB 2026-04-13 01:20:23.388866 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 bd480c64251d 47 hours ago 284MB 2026-04-13 01:20:23.388878 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 396ad59e4bd6 47 hours ago 419MB 2026-04-13 01:20:23.388889 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 36c40086ca8c 47 hours ago 274MB 2026-04-13 01:20:23.388900 | orchestrator | registry.osism.tech/kolla/cron 2024.2 
ba2465d5505f 47 hours ago 273MB 2026-04-13 01:20:23.388932 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 88d5b15e75e6 47 hours ago 282MB 2026-04-13 01:20:23.388943 | orchestrator | registry.osism.tech/kolla/redis 2024.2 6252cd5169df 47 hours ago 281MB 2026-04-13 01:20:23.388954 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 9c8136b35f55 47 hours ago 280MB 2026-04-13 01:20:23.388965 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 cff1d041dbb4 47 hours ago 287MB 2026-04-13 01:20:23.388976 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 11025aea9b95 47 hours ago 287MB 2026-04-13 01:20:23.388987 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 cbd6f258faaa 47 hours ago 460MB 2026-04-13 01:20:23.388998 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 ac5740c70814 47 hours ago 1.16GB 2026-04-13 01:20:23.389009 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 ee52a516281f 47 hours ago 365MB 2026-04-13 01:20:23.389020 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 6b04e3658031 47 hours ago 309MB 2026-04-13 01:20:23.389031 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 ae27083b15a5 47 hours ago 313MB 2026-04-13 01:20:23.389042 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 98d3b87ed02b 47 hours ago 306MB 2026-04-13 01:20:23.389053 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 8c80eb88a699 47 hours ago 299MB 2026-04-13 01:20:23.389064 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 edef39569dfc 47 hours ago 848MB 2026-04-13 01:20:23.389075 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 ee5f7cb9bdcc 47 hours ago 848MB 2026-04-13 01:20:23.389086 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 2d509010657e 47 hours ago 848MB 2026-04-13 01:20:23.389097 | orchestrator | 
registry.osism.tech/kolla/ovn-controller 2024.2 df1e056a42f9 47 hours ago 848MB 2026-04-13 01:20:23.389108 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 bc46bff56fc8 47 hours ago 997MB 2026-04-13 01:20:23.389119 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 32f86879d566 47 hours ago 992MB 2026-04-13 01:20:23.389130 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 aba37cbf5a3a 47 hours ago 997MB 2026-04-13 01:20:23.389141 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 9d0de4928149 47 hours ago 992MB 2026-04-13 01:20:23.389152 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 4aa498e7d17f 47 hours ago 992MB 2026-04-13 01:20:23.389163 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 757006639529 47 hours ago 991MB 2026-04-13 01:20:23.389174 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 13d7ea177ef7 47 hours ago 1.05GB 2026-04-13 01:20:23.389185 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 9f4b643f7ebb 47 hours ago 1.07GB 2026-04-13 01:20:23.389196 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 af3e22d4b8ee 47 hours ago 1.04GB 2026-04-13 01:20:23.389207 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 7fa98e32b1a4 47 hours ago 1.05GB 2026-04-13 01:20:23.389218 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 3d59a31c5300 47 hours ago 997MB 2026-04-13 01:20:23.389255 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 c60dba3e519e 47 hours ago 983MB 2026-04-13 01:20:23.389267 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 46009656346c 47 hours ago 1.38GB 2026-04-13 01:20:23.389278 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 08cf652880f2 47 hours ago 1.22GB 2026-04-13 01:20:23.389297 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 8ced8e39e0e2 47 hours ago 1.22GB 2026-04-13 01:20:23.389308 | orchestrator | 
registry.osism.tech/kolla/nova-scheduler 2024.2 01b9d4777aad 47 hours ago 1.22GB 2026-04-13 01:20:23.389319 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 279248ef27f3 47 hours ago 999MB 2026-04-13 01:20:23.389330 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 aabb20fb09c2 47 hours ago 998MB 2026-04-13 01:20:23.389341 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 397971e07077 47 hours ago 999MB 2026-04-13 01:20:23.389352 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 b634398cfda7 47 hours ago 1.14GB 2026-04-13 01:20:23.389363 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 5f3220cb8117 47 hours ago 1.25GB 2026-04-13 01:20:23.389374 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 c6bb595625ea 47 hours ago 1.17GB 2026-04-13 01:20:23.389385 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 8bd3d0213ed2 47 hours ago 1.41GB 2026-04-13 01:20:23.389396 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 74f27783512a 47 hours ago 1.73GB 2026-04-13 01:20:23.389448 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 25b987de35dd 47 hours ago 1.42GB 2026-04-13 01:20:23.389460 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 649ad89d5991 47 hours ago 1.41GB 2026-04-13 01:20:23.389471 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 2620ba966839 47 hours ago 983MB 2026-04-13 01:20:23.389482 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 4387c4339c3e 47 hours ago 984MB 2026-04-13 01:20:23.389493 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 150d1918ab5c 47 hours ago 1.11GB 2026-04-13 01:20:23.389504 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 dfb350742fdd 47 hours ago 1.04GB 2026-04-13 01:20:23.389515 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c027424b9687 47 hours ago 1.04GB 2026-04-13 
01:20:23.389526 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 6479ec65eecd 47 hours ago 1.06GB 2026-04-13 01:20:23.389536 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 90d329a4c7eb 47 hours ago 1.06GB 2026-04-13 01:20:23.389553 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 51041227b1ae 47 hours ago 1.04GB 2026-04-13 01:20:23.389564 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 753916b2611f 47 hours ago 981MB 2026-04-13 01:20:23.389576 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 51f53b6fa850 47 hours ago 982MB 2026-04-13 01:20:23.389587 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 7d0f7097031f 47 hours ago 982MB 2026-04-13 01:20:23.389598 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 956fd4f6452c 47 hours ago 982MB 2026-04-13 01:20:23.389609 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 fe1bc4818a44 2 days ago 1.54GB 2026-04-13 01:20:23.389620 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 edddd92cc686 4 days ago 1.56GB 2026-04-13 01:20:23.553526 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-13 01:20:23.553798 | orchestrator | ++ semver latest 5.0.0 2026-04-13 01:20:23.604806 | orchestrator | 2026-04-13 01:20:23.604897 | orchestrator | ## Containers @ testbed-node-1 2026-04-13 01:20:23.604915 | orchestrator | 2026-04-13 01:20:23.604929 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-13 01:20:23.604946 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-13 01:20:23.604967 | orchestrator | + echo 2026-04-13 01:20:23.605023 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-04-13 01:20:23.605046 | orchestrator | + echo 2026-04-13 01:20:23.605066 | orchestrator | + osism container testbed-node-1 ps 2026-04-13 01:20:25.168513 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-13 01:20:25.168622 | orchestrator | 
b4078edc768c registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_worker 2026-04-13 01:20:25.168640 | orchestrator | d1adbab49cdb registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_housekeeping 2026-04-13 01:20:25.168652 | orchestrator | 9102794da5e6 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2026-04-13 01:20:25.168663 | orchestrator | 748e7747336c registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-04-13 01:20:25.168688 | orchestrator | 7b70fa5e6a69 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-04-13 01:20:25.168710 | orchestrator | 8a2e93409543 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-04-13 01:20:25.168721 | orchestrator | a250ff9530eb registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2026-04-13 01:20:25.168732 | orchestrator | 1834c7e78309 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2026-04-13 01:20:25.168748 | orchestrator | 0d88b9ab0408 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api 2026-04-13 01:20:25.168759 | orchestrator | db1d104b3dcc registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2026-04-13 01:20:25.168770 | orchestrator | 97a98ec54c8c registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) nova_conductor 2026-04-13 01:20:25.168781 | orchestrator | e4b5c0400a2e 
registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2026-04-13 01:20:25.168792 | orchestrator | 49d97db5d8c8 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2026-04-13 01:20:25.168803 | orchestrator | 35696fc2d6bd registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2026-04-13 01:20:25.168833 | orchestrator | 14760fd03c51 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2026-04-13 01:20:25.168844 | orchestrator | 4754e33647b8 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2026-04-13 01:20:25.168855 | orchestrator | e0c5bbc95a35 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_api 2026-04-13 01:20:25.168866 | orchestrator | 320af4d5a4cb registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2026-04-13 01:20:25.168898 | orchestrator | e53143bfdcbe registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker 2026-04-13 01:20:25.168910 | orchestrator | 9fe43c27abdb registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api 2026-04-13 01:20:25.168921 | orchestrator | 54147567007c registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener 2026-04-13 01:20:25.168951 | orchestrator | 3ff9828906a8 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2026-04-13 
01:20:25.168969 | orchestrator | 501527d6b5dc registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-04-13 01:20:25.168987 | orchestrator | cb0e250663ed registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_backup 2026-04-13 01:20:25.168998 | orchestrator | 246f27faa293 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_volume 2026-04-13 01:20:25.169009 | orchestrator | 757e09447edf registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api 2026-04-13 01:20:25.169020 | orchestrator | 2a1703ccba47 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2026-04-13 01:20:25.169031 | orchestrator | 514057ddfae2 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2026-04-13 01:20:25.169041 | orchestrator | cc70085d9bb0 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2026-04-13 01:20:25.169052 | orchestrator | 61e8c58a6621 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2026-04-13 01:20:25.169063 | orchestrator | d263e1245b43 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2026-04-13 01:20:25.169074 | orchestrator | eef9fa7df05d registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2026-04-13 01:20:25.169084 | orchestrator | d503f783dfd5 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 
minutes ago Up 16 minutes prometheus_node_exporter 2026-04-13 01:20:25.169095 | orchestrator | 052168337f20 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1 2026-04-13 01:20:25.169106 | orchestrator | 01f91a6f0ecd registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-04-13 01:20:25.169116 | orchestrator | 17398686a8e1 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2026-04-13 01:20:25.169127 | orchestrator | e87c7b35b83a registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2026-04-13 01:20:25.169144 | orchestrator | 3a6a4c8f9443 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-04-13 01:20:25.169163 | orchestrator | 38cc362fb791 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2026-04-13 01:20:25.169174 | orchestrator | a8eb52112b9d registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2026-04-13 01:20:25.169185 | orchestrator | c5f4346a8519 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2026-04-13 01:20:25.169196 | orchestrator | b4a7708de98f registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-04-13 01:20:25.169207 | orchestrator | db97b4fd963b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1 2026-04-13 01:20:25.169218 | orchestrator | 4cc6ccabfcff registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2026-04-13 
01:20:25.169237 | orchestrator | c0d20993c020 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2026-04-13 01:20:25.169248 | orchestrator | 8bb58f04a429 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2026-04-13 01:20:25.169259 | orchestrator | ac394c494ad2 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2026-04-13 01:20:25.169270 | orchestrator | 0554e6c66093 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db 2026-04-13 01:20:25.169281 | orchestrator | 3a78e072d8cd registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2026-04-13 01:20:25.169292 | orchestrator | 683b1a67c00a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1 2026-04-13 01:20:25.169302 | orchestrator | b29266bf7816 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2026-04-13 01:20:25.169313 | orchestrator | 2025ef77341b registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2026-04-13 01:20:25.169324 | orchestrator | 11cfb513e772 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-04-13 01:20:25.169335 | orchestrator | a3241bff1da2 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-04-13 01:20:25.169346 | orchestrator | 6a8a9bf6ae98 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-04-13 01:20:25.169356 | orchestrator | 797bcd401ad8 
registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-04-13 01:20:25.169367 | orchestrator | ff97f4985c0d registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2026-04-13 01:20:25.169378 | orchestrator | 8c53c9a22daf registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-04-13 01:20:25.169396 | orchestrator | 928c05d4e9af registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2026-04-13 01:20:25.324134 | orchestrator | 2026-04-13 01:20:25.324236 | orchestrator | ## Images @ testbed-node-1 2026-04-13 01:20:25.324250 | orchestrator | 2026-04-13 01:20:25.324260 | orchestrator | + echo 2026-04-13 01:20:25.324269 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-13 01:20:25.324278 | orchestrator | + echo 2026-04-13 01:20:25.324286 | orchestrator | + osism container testbed-node-1 images 2026-04-13 01:20:26.839206 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-13 01:20:26.839307 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 851c6241fc0f 21 hours ago 1.35GB 2026-04-13 01:20:26.839319 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 72b35c3a08d6 47 hours ago 587MB 2026-04-13 01:20:26.839329 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 e80460bcbd2b 47 hours ago 675MB 2026-04-13 01:20:26.839339 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3502772e2bf5 47 hours ago 1.04GB 2026-04-13 01:20:26.839348 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 8cd075b4e71f 47 hours ago 330MB 2026-04-13 01:20:26.839357 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 bd480c64251d 47 hours ago 284MB 2026-04-13 01:20:26.839383 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 396ad59e4bd6 47 hours ago 419MB 2026-04-13 01:20:26.839392 | orchestrator | 
registry.osism.tech/kolla/memcached 2024.2 36c40086ca8c 47 hours ago 274MB 2026-04-13 01:20:26.839444 | orchestrator | registry.osism.tech/kolla/cron 2024.2 ba2465d5505f 47 hours ago 273MB 2026-04-13 01:20:26.839455 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 88d5b15e75e6 47 hours ago 282MB 2026-04-13 01:20:26.839468 | orchestrator | registry.osism.tech/kolla/redis 2024.2 6252cd5169df 47 hours ago 281MB 2026-04-13 01:20:26.839477 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 9c8136b35f55 47 hours ago 280MB 2026-04-13 01:20:26.839486 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 cff1d041dbb4 47 hours ago 287MB 2026-04-13 01:20:26.839494 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 11025aea9b95 47 hours ago 287MB 2026-04-13 01:20:26.839503 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 cbd6f258faaa 47 hours ago 460MB 2026-04-13 01:20:26.839512 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 ac5740c70814 47 hours ago 1.16GB 2026-04-13 01:20:26.839521 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 ee52a516281f 47 hours ago 365MB 2026-04-13 01:20:26.839529 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 6b04e3658031 47 hours ago 309MB 2026-04-13 01:20:26.839538 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 ae27083b15a5 47 hours ago 313MB 2026-04-13 01:20:26.839546 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 98d3b87ed02b 47 hours ago 306MB 2026-04-13 01:20:26.839555 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 8c80eb88a699 47 hours ago 299MB 2026-04-13 01:20:26.839564 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 edef39569dfc 47 hours ago 848MB 2026-04-13 01:20:26.839573 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 ee5f7cb9bdcc 47 hours ago 848MB 2026-04-13 
01:20:26.839581 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 2d509010657e 47 hours ago 848MB 2026-04-13 01:20:26.839611 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 df1e056a42f9 47 hours ago 848MB 2026-04-13 01:20:26.839620 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 bc46bff56fc8 47 hours ago 997MB 2026-04-13 01:20:26.839628 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 32f86879d566 47 hours ago 992MB 2026-04-13 01:20:26.839637 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 aba37cbf5a3a 47 hours ago 997MB 2026-04-13 01:20:26.839659 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 9d0de4928149 47 hours ago 992MB 2026-04-13 01:20:26.839668 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 4aa498e7d17f 47 hours ago 992MB 2026-04-13 01:20:26.839686 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 757006639529 47 hours ago 991MB 2026-04-13 01:20:26.839695 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 13d7ea177ef7 47 hours ago 1.05GB 2026-04-13 01:20:26.839704 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 9f4b643f7ebb 47 hours ago 1.07GB 2026-04-13 01:20:26.839713 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 af3e22d4b8ee 47 hours ago 1.04GB 2026-04-13 01:20:26.839722 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 c60dba3e519e 47 hours ago 983MB 2026-04-13 01:20:26.839730 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 46009656346c 47 hours ago 1.38GB 2026-04-13 01:20:26.839755 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 08cf652880f2 47 hours ago 1.22GB 2026-04-13 01:20:26.839766 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 8ced8e39e0e2 47 hours ago 1.22GB 2026-04-13 01:20:26.839776 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 01b9d4777aad 47 hours ago 1.22GB 2026-04-13 
01:20:26.839786 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 279248ef27f3 47 hours ago 999MB 2026-04-13 01:20:26.839797 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 aabb20fb09c2 47 hours ago 998MB 2026-04-13 01:20:26.839807 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 397971e07077 47 hours ago 999MB 2026-04-13 01:20:26.839816 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 b634398cfda7 47 hours ago 1.14GB 2026-04-13 01:20:26.839826 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 5f3220cb8117 47 hours ago 1.25GB 2026-04-13 01:20:26.839836 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 c6bb595625ea 47 hours ago 1.17GB 2026-04-13 01:20:26.839846 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 8bd3d0213ed2 47 hours ago 1.41GB 2026-04-13 01:20:26.839857 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 74f27783512a 47 hours ago 1.73GB 2026-04-13 01:20:26.839866 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 25b987de35dd 47 hours ago 1.42GB 2026-04-13 01:20:26.839876 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 649ad89d5991 47 hours ago 1.41GB 2026-04-13 01:20:26.839886 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 150d1918ab5c 47 hours ago 1.11GB 2026-04-13 01:20:26.839896 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 dfb350742fdd 47 hours ago 1.04GB 2026-04-13 01:20:26.839906 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c027424b9687 47 hours ago 1.04GB 2026-04-13 01:20:26.839915 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 6479ec65eecd 47 hours ago 1.06GB 2026-04-13 01:20:26.839925 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 90d329a4c7eb 47 hours ago 1.06GB 2026-04-13 01:20:26.839942 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 51041227b1ae 47 hours ago 
1.04GB 2026-04-13 01:20:26.839956 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 fe1bc4818a44 2 days ago 1.54GB 2026-04-13 01:20:26.839967 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 edddd92cc686 4 days ago 1.56GB 2026-04-13 01:20:27.002605 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-13 01:20:27.002725 | orchestrator | ++ semver latest 5.0.0 2026-04-13 01:20:27.057754 | orchestrator | 2026-04-13 01:20:27.057842 | orchestrator | ## Containers @ testbed-node-2 2026-04-13 01:20:27.057859 | orchestrator | 2026-04-13 01:20:27.057877 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-13 01:20:27.057889 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-13 01:20:27.057899 | orchestrator | + echo 2026-04-13 01:20:27.057910 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-04-13 01:20:27.057920 | orchestrator | + echo 2026-04-13 01:20:27.057930 | orchestrator | + osism container testbed-node-2 ps 2026-04-13 01:20:28.586120 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-13 01:20:28.586244 | orchestrator | e048ab95c47b registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_worker 2026-04-13 01:20:28.586274 | orchestrator | 403c595e24ff registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_housekeeping 2026-04-13 01:20:28.586294 | orchestrator | d37e90c2d58c registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2026-04-13 01:20:28.586332 | orchestrator | dbd54d9d6832 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-04-13 01:20:28.586354 | orchestrator | 2065f05d926d registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 
minutes (healthy) octavia_api 2026-04-13 01:20:28.586374 | orchestrator | 02867f094a7e registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-04-13 01:20:28.586494 | orchestrator | 65d545943587 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2026-04-13 01:20:28.586508 | orchestrator | b8a929aef9b9 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2026-04-13 01:20:28.586520 | orchestrator | dac8d9a263b9 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api 2026-04-13 01:20:28.586531 | orchestrator | 467feedfc449 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_novncproxy 2026-04-13 01:20:28.586542 | orchestrator | 824b5730d94c registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) nova_conductor 2026-04-13 01:20:28.586555 | orchestrator | 361b1e68cf5c registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2026-04-13 01:20:28.586568 | orchestrator | 200b419fc429 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2026-04-13 01:20:28.586580 | orchestrator | fde81e8562dc registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2026-04-13 01:20:28.586616 | orchestrator | 7950c0f3027a registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2026-04-13 01:20:28.586628 | orchestrator | 0d33aeabbae8 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) 
designate_central 2026-04-13 01:20:28.586639 | orchestrator | f2b6695a5d84 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_api 2026-04-13 01:20:28.586650 | orchestrator | b7fbff00322b registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2026-04-13 01:20:28.586662 | orchestrator | cdef3c50e650 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker 2026-04-13 01:20:28.586673 | orchestrator | 53ddf652f215 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener 2026-04-13 01:20:28.586684 | orchestrator | aa07ebfa9279 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api 2026-04-13 01:20:28.586717 | orchestrator | 99c79f40a51f registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2026-04-13 01:20:28.586729 | orchestrator | 849c0adc58d0 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-04-13 01:20:28.586741 | orchestrator | a37de6245ae1 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_backup 2026-04-13 01:20:28.586752 | orchestrator | e32867542758 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_volume 2026-04-13 01:20:28.586763 | orchestrator | 43c5975829b4 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api 2026-04-13 01:20:28.586775 | orchestrator | f6a0abc0d9bc registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 15 minutes ago Up 
15 minutes (healthy) cinder_scheduler 2026-04-13 01:20:28.586786 | orchestrator | 5d4bf82fad61 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2026-04-13 01:20:28.586798 | orchestrator | 7d62e41ec80e registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2026-04-13 01:20:28.586835 | orchestrator | 08348c4260d0 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 16 minutes ago Up 15 minutes prometheus_cadvisor 2026-04-13 01:20:28.586847 | orchestrator | 8c83a32a4f84 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2026-04-13 01:20:28.586859 | orchestrator | 2303e9996499 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2026-04-13 01:20:28.586870 | orchestrator | ac3ae61b6b6c registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2026-04-13 01:20:28.586890 | orchestrator | 4e763a8c3e35 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2026-04-13 01:20:28.586902 | orchestrator | dbd9dba51070 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 18 minutes (healthy) keystone 2026-04-13 01:20:28.586913 | orchestrator | 4f548fe3e452 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2026-04-13 01:20:28.586924 | orchestrator | 79cb9dff983b registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2026-04-13 01:20:28.586940 | orchestrator | 8b1d0ae9f0b6 registry.osism.tech/kolla/keystone-ssh:2024.2 
"dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-04-13 01:20:28.586950 | orchestrator | eb7548b7a5ed registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2026-04-13 01:20:28.586960 | orchestrator | fb938b0e1729 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 21 minutes (healthy) mariadb 2026-04-13 01:20:28.586970 | orchestrator | 807b07ad852f registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2026-04-13 01:20:28.586979 | orchestrator | b6a51b9ac343 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-04-13 01:20:28.586989 | orchestrator | 20a84ba8a2b5 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2026-04-13 01:20:28.586999 | orchestrator | b2577b2ad2fa registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2026-04-13 01:20:28.587017 | orchestrator | 4d2665295975 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2026-04-13 01:20:28.587028 | orchestrator | 48d688902cfb registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2026-04-13 01:20:28.587037 | orchestrator | 5a037576eb15 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2026-04-13 01:20:28.587047 | orchestrator | 1ae18c58699a registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db 2026-04-13 01:20:28.587056 | orchestrator | 862735ecc249 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2026-04-13 01:20:28.587066 
| orchestrator | 55a448ab0149 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2026-04-13 01:20:28.587075 | orchestrator | b4520027b10e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2026-04-13 01:20:28.587085 | orchestrator | d16ac5f97ef1 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2026-04-13 01:20:28.587095 | orchestrator | 108dba36d8b1 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-04-13 01:20:28.587111 | orchestrator | bb48aa0dfee0 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-04-13 01:20:28.587120 | orchestrator | 1e2360d804f2 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-04-13 01:20:28.587134 | orchestrator | 9e131bbd81a5 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-04-13 01:20:28.587150 | orchestrator | ac3fa791997a registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2026-04-13 01:20:28.587165 | orchestrator | d2dade7a11a8 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-04-13 01:20:28.587180 | orchestrator | f59f0e4e3ad3 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2026-04-13 01:20:28.742062 | orchestrator | 2026-04-13 01:20:28.742142 | orchestrator | ## Images @ testbed-node-2 2026-04-13 01:20:28.742153 | orchestrator | 2026-04-13 01:20:28.742162 | orchestrator | + echo 2026-04-13 01:20:28.742170 | orchestrator | + echo '## Images @ testbed-node-2' 
2026-04-13 01:20:28.742178 | orchestrator | + echo 2026-04-13 01:20:28.742186 | orchestrator | + osism container testbed-node-2 images 2026-04-13 01:20:30.293104 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-13 01:20:30.293207 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 851c6241fc0f 21 hours ago 1.35GB 2026-04-13 01:20:30.293223 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 72b35c3a08d6 47 hours ago 587MB 2026-04-13 01:20:30.293234 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 e80460bcbd2b 47 hours ago 675MB 2026-04-13 01:20:30.293245 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3502772e2bf5 47 hours ago 1.04GB 2026-04-13 01:20:30.293256 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 8cd075b4e71f 47 hours ago 330MB 2026-04-13 01:20:30.293267 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 bd480c64251d 47 hours ago 284MB 2026-04-13 01:20:30.293278 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 396ad59e4bd6 47 hours ago 419MB 2026-04-13 01:20:30.293288 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 36c40086ca8c 47 hours ago 274MB 2026-04-13 01:20:30.293299 | orchestrator | registry.osism.tech/kolla/cron 2024.2 ba2465d5505f 47 hours ago 273MB 2026-04-13 01:20:30.293310 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 88d5b15e75e6 47 hours ago 282MB 2026-04-13 01:20:30.293320 | orchestrator | registry.osism.tech/kolla/redis 2024.2 6252cd5169df 47 hours ago 281MB 2026-04-13 01:20:30.293350 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 9c8136b35f55 47 hours ago 280MB 2026-04-13 01:20:30.293362 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 cff1d041dbb4 47 hours ago 287MB 2026-04-13 01:20:30.293373 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 11025aea9b95 47 hours ago 287MB 2026-04-13 01:20:30.293384 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 
cbd6f258faaa 47 hours ago 460MB 2026-04-13 01:20:30.293394 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 ac5740c70814 47 hours ago 1.16GB 2026-04-13 01:20:30.293460 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 ee52a516281f 47 hours ago 365MB 2026-04-13 01:20:30.293472 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 6b04e3658031 47 hours ago 309MB 2026-04-13 01:20:30.293505 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 ae27083b15a5 47 hours ago 313MB 2026-04-13 01:20:30.293517 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 98d3b87ed02b 47 hours ago 306MB 2026-04-13 01:20:30.293528 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 8c80eb88a699 47 hours ago 299MB 2026-04-13 01:20:30.293539 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 edef39569dfc 47 hours ago 848MB 2026-04-13 01:20:30.293549 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 ee5f7cb9bdcc 47 hours ago 848MB 2026-04-13 01:20:30.293560 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 2d509010657e 47 hours ago 848MB 2026-04-13 01:20:30.293571 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 df1e056a42f9 47 hours ago 848MB 2026-04-13 01:20:30.293582 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 bc46bff56fc8 47 hours ago 997MB 2026-04-13 01:20:30.293592 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 32f86879d566 47 hours ago 992MB 2026-04-13 01:20:30.293603 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 aba37cbf5a3a 47 hours ago 997MB 2026-04-13 01:20:30.293614 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 9d0de4928149 47 hours ago 992MB 2026-04-13 01:20:30.293626 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 4aa498e7d17f 47 hours ago 992MB 2026-04-13 01:20:30.293639 | 
orchestrator | registry.osism.tech/kolla/designate-central 2024.2 757006639529 47 hours ago 991MB 2026-04-13 01:20:30.293651 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 13d7ea177ef7 47 hours ago 1.05GB 2026-04-13 01:20:30.293664 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 9f4b643f7ebb 47 hours ago 1.07GB 2026-04-13 01:20:30.293677 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 af3e22d4b8ee 47 hours ago 1.04GB 2026-04-13 01:20:30.293690 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 c60dba3e519e 47 hours ago 983MB 2026-04-13 01:20:30.293702 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 46009656346c 47 hours ago 1.38GB 2026-04-13 01:20:30.293735 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 08cf652880f2 47 hours ago 1.22GB 2026-04-13 01:20:30.293748 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 8ced8e39e0e2 47 hours ago 1.22GB 2026-04-13 01:20:30.293767 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 01b9d4777aad 47 hours ago 1.22GB 2026-04-13 01:20:30.293780 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 279248ef27f3 47 hours ago 999MB 2026-04-13 01:20:30.293792 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 aabb20fb09c2 47 hours ago 998MB 2026-04-13 01:20:30.293805 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 397971e07077 47 hours ago 999MB 2026-04-13 01:20:30.293817 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 b634398cfda7 47 hours ago 1.14GB 2026-04-13 01:20:30.293829 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 5f3220cb8117 47 hours ago 1.25GB 2026-04-13 01:20:30.293842 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 c6bb595625ea 47 hours ago 1.17GB 2026-04-13 01:20:30.293854 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 8bd3d0213ed2 47 hours ago 1.41GB 2026-04-13 01:20:30.293866 | 
orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 74f27783512a 47 hours ago 1.73GB 2026-04-13 01:20:30.293886 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 25b987de35dd 47 hours ago 1.42GB 2026-04-13 01:20:30.293900 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 649ad89d5991 47 hours ago 1.41GB 2026-04-13 01:20:30.293912 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 150d1918ab5c 47 hours ago 1.11GB 2026-04-13 01:20:30.293925 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 dfb350742fdd 47 hours ago 1.04GB 2026-04-13 01:20:30.293937 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c027424b9687 47 hours ago 1.04GB 2026-04-13 01:20:30.293950 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 6479ec65eecd 47 hours ago 1.06GB 2026-04-13 01:20:30.293962 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 90d329a4c7eb 47 hours ago 1.06GB 2026-04-13 01:20:30.293974 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 51041227b1ae 47 hours ago 1.04GB 2026-04-13 01:20:30.293987 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 fe1bc4818a44 2 days ago 1.54GB 2026-04-13 01:20:30.293999 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 edddd92cc686 4 days ago 1.56GB 2026-04-13 01:20:30.468482 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-13 01:20:30.475759 | orchestrator | + set -e 2026-04-13 01:20:30.475836 | orchestrator | + source /opt/manager-vars.sh 2026-04-13 01:20:30.477304 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-13 01:20:30.477345 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-13 01:20:30.477352 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-13 01:20:30.477358 | orchestrator | ++ CEPH_VERSION=reef 2026-04-13 01:20:30.477365 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-13 01:20:30.477372 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2026-04-13 01:20:30.477379 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-13 01:20:30.477385 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-13 01:20:30.477392 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-13 01:20:30.477398 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-13 01:20:30.477442 | orchestrator | ++ export ARA=false 2026-04-13 01:20:30.477449 | orchestrator | ++ ARA=false 2026-04-13 01:20:30.477456 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-13 01:20:30.477462 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-13 01:20:30.477468 | orchestrator | ++ export TEMPEST=true 2026-04-13 01:20:30.477475 | orchestrator | ++ TEMPEST=true 2026-04-13 01:20:30.477481 | orchestrator | ++ export IS_ZUUL=true 2026-04-13 01:20:30.477488 | orchestrator | ++ IS_ZUUL=true 2026-04-13 01:20:30.477494 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-04-13 01:20:30.477501 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-04-13 01:20:30.477508 | orchestrator | ++ export EXTERNAL_API=false 2026-04-13 01:20:30.477514 | orchestrator | ++ EXTERNAL_API=false 2026-04-13 01:20:30.477520 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-13 01:20:30.477527 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-13 01:20:30.477533 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-13 01:20:30.477539 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-13 01:20:30.477546 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-13 01:20:30.477552 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-13 01:20:30.477559 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-13 01:20:30.477565 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-13 01:20:30.485995 | orchestrator | + set -e 2026-04-13 01:20:30.486495 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-13 01:20:30.486518 | orchestrator | ++ export 
INTERACTIVE=false 2026-04-13 01:20:30.486523 | orchestrator | ++ INTERACTIVE=false 2026-04-13 01:20:30.486527 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-13 01:20:30.486532 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-13 01:20:30.486536 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-13 01:20:30.487043 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-13 01:20:30.492654 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-13 01:20:30.493052 | orchestrator | 2026-04-13 01:20:30.493098 | orchestrator | # Ceph status 2026-04-13 01:20:30.493106 | orchestrator | 2026-04-13 01:20:30.493112 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-13 01:20:30.493141 | orchestrator | + echo 2026-04-13 01:20:30.493148 | orchestrator | + echo '# Ceph status' 2026-04-13 01:20:30.493155 | orchestrator | + echo 2026-04-13 01:20:30.493161 | orchestrator | + ceph -s 2026-04-13 01:20:31.127660 | orchestrator | cluster: 2026-04-13 01:20:31.127764 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-04-13 01:20:31.127781 | orchestrator | health: HEALTH_OK 2026-04-13 01:20:31.127795 | orchestrator | 2026-04-13 01:20:31.127808 | orchestrator | services: 2026-04-13 01:20:31.127821 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2026-04-13 01:20:31.127835 | orchestrator | mgr: testbed-node-2(active, since 17m), standbys: testbed-node-1, testbed-node-0 2026-04-13 01:20:31.127847 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-04-13 01:20:31.127860 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 26m) 2026-04-13 01:20:31.127871 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-04-13 01:20:31.127884 | orchestrator | 2026-04-13 01:20:31.127896 | orchestrator | data: 2026-04-13 01:20:31.127908 | orchestrator | volumes: 1/1 healthy 2026-04-13 01:20:31.127920 | orchestrator | pools: 
14 pools, 401 pgs 2026-04-13 01:20:31.127931 | orchestrator | objects: 555 objects, 2.2 GiB 2026-04-13 01:20:31.127942 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-04-13 01:20:31.127953 | orchestrator | pgs: 401 active+clean 2026-04-13 01:20:31.127964 | orchestrator | 2026-04-13 01:20:31.172700 | orchestrator | 2026-04-13 01:20:31.172816 | orchestrator | # Ceph versions 2026-04-13 01:20:31.172843 | orchestrator | 2026-04-13 01:20:31.172863 | orchestrator | + echo 2026-04-13 01:20:31.172880 | orchestrator | + echo '# Ceph versions' 2026-04-13 01:20:31.172902 | orchestrator | + echo 2026-04-13 01:20:31.172919 | orchestrator | + ceph versions 2026-04-13 01:20:31.789954 | orchestrator | { 2026-04-13 01:20:31.790176 | orchestrator | "mon": { 2026-04-13 01:20:31.790207 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-13 01:20:31.790229 | orchestrator | }, 2026-04-13 01:20:31.790248 | orchestrator | "mgr": { 2026-04-13 01:20:31.790267 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-13 01:20:31.790284 | orchestrator | }, 2026-04-13 01:20:31.790301 | orchestrator | "osd": { 2026-04-13 01:20:31.790316 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6 2026-04-13 01:20:31.790334 | orchestrator | }, 2026-04-13 01:20:31.790350 | orchestrator | "mds": { 2026-04-13 01:20:31.790368 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-13 01:20:31.790384 | orchestrator | }, 2026-04-13 01:20:31.790480 | orchestrator | "rgw": { 2026-04-13 01:20:31.790533 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-13 01:20:31.790551 | orchestrator | }, 2026-04-13 01:20:31.790567 | orchestrator | "overall": { 2026-04-13 01:20:31.790585 | orchestrator | "ceph version 18.2.8 
(efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18 2026-04-13 01:20:31.790603 | orchestrator | } 2026-04-13 01:20:31.790620 | orchestrator | } 2026-04-13 01:20:31.844696 | orchestrator | 2026-04-13 01:20:31.845814 | orchestrator | # Ceph OSD tree 2026-04-13 01:20:31.845881 | orchestrator | 2026-04-13 01:20:31.845904 | orchestrator | + echo 2026-04-13 01:20:31.845921 | orchestrator | + echo '# Ceph OSD tree' 2026-04-13 01:20:31.845939 | orchestrator | + echo 2026-04-13 01:20:31.845956 | orchestrator | + ceph osd df tree 2026-04-13 01:20:32.390228 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-13 01:20:32.390337 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 390 MiB 113 GiB 5.88 1.00 - root default 2026-04-13 01:20:32.390354 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 121 MiB 38 GiB 5.86 1.00 - host testbed-node-3 2026-04-13 01:20:32.390365 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.24 1.23 199 up osd.2 2026-04-13 01:20:32.390376 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 918 MiB 866 MiB 1 KiB 52 MiB 19 GiB 4.49 0.76 189 up osd.4 2026-04-13 01:20:32.390387 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 126 MiB 38 GiB 5.87 1.00 - host testbed-node-4 2026-04-13 01:20:32.390397 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1022 MiB 970 MiB 1 KiB 52 MiB 19 GiB 5.00 0.85 189 up osd.0 2026-04-13 01:20:32.390477 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.74 1.15 201 up osd.3 2026-04-13 01:20:32.390491 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.91 1.01 - host testbed-node-5 2026-04-13 01:20:32.390521 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.44 0.92 192 up osd.1 2026-04-13 01:20:32.390532 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.39 1.09 200 up osd.5 2026-04-13 
01:20:32.390543 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 390 MiB 113 GiB 5.88 2026-04-13 01:20:32.390554 | orchestrator | MIN/MAX VAR: 0.76/1.23 STDDEV: 0.98 2026-04-13 01:20:32.447592 | orchestrator | 2026-04-13 01:20:32.447699 | orchestrator | # Ceph monitor status 2026-04-13 01:20:32.447726 | orchestrator | 2026-04-13 01:20:32.447747 | orchestrator | + echo 2026-04-13 01:20:32.447763 | orchestrator | + echo '# Ceph monitor status' 2026-04-13 01:20:32.447781 | orchestrator | + echo 2026-04-13 01:20:32.447799 | orchestrator | + ceph mon stat 2026-04-13 01:20:33.037123 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-13 01:20:33.082650 | orchestrator | 2026-04-13 01:20:33.082740 | orchestrator | # Ceph quorum status 2026-04-13 01:20:33.082756 | orchestrator | 2026-04-13 01:20:33.082769 | orchestrator | + echo 2026-04-13 01:20:33.082781 | orchestrator | + echo '# Ceph quorum status' 2026-04-13 01:20:33.082793 | orchestrator | + echo 2026-04-13 01:20:33.083960 | orchestrator | + ceph quorum_status 2026-04-13 01:20:33.083986 | orchestrator | + jq 2026-04-13 01:20:33.698591 | orchestrator | { 2026-04-13 01:20:33.698869 | orchestrator | "election_epoch": 8, 2026-04-13 01:20:33.698896 | orchestrator | "quorum": [ 2026-04-13 01:20:33.698909 | orchestrator | 0, 2026-04-13 01:20:33.698920 | orchestrator | 1, 2026-04-13 01:20:33.698931 | orchestrator | 2 2026-04-13 01:20:33.698941 | orchestrator | ], 2026-04-13 01:20:33.698952 | orchestrator | "quorum_names": [ 2026-04-13 01:20:33.698963 | orchestrator | "testbed-node-0", 2026-04-13 01:20:33.698974 | orchestrator | "testbed-node-1", 2026-04-13 01:20:33.698985 | orchestrator | 
"testbed-node-2" 2026-04-13 01:20:33.698996 | orchestrator | ], 2026-04-13 01:20:33.699007 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-13 01:20:33.699019 | orchestrator | "quorum_age": 1723, 2026-04-13 01:20:33.699030 | orchestrator | "features": { 2026-04-13 01:20:33.699041 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-13 01:20:33.699052 | orchestrator | "quorum_mon": [ 2026-04-13 01:20:33.699063 | orchestrator | "kraken", 2026-04-13 01:20:33.699073 | orchestrator | "luminous", 2026-04-13 01:20:33.699084 | orchestrator | "mimic", 2026-04-13 01:20:33.699095 | orchestrator | "osdmap-prune", 2026-04-13 01:20:33.699106 | orchestrator | "nautilus", 2026-04-13 01:20:33.699116 | orchestrator | "octopus", 2026-04-13 01:20:33.699127 | orchestrator | "pacific", 2026-04-13 01:20:33.699137 | orchestrator | "elector-pinging", 2026-04-13 01:20:33.699148 | orchestrator | "quincy", 2026-04-13 01:20:33.699158 | orchestrator | "reef" 2026-04-13 01:20:33.699169 | orchestrator | ] 2026-04-13 01:20:33.699180 | orchestrator | }, 2026-04-13 01:20:33.699191 | orchestrator | "monmap": { 2026-04-13 01:20:33.699202 | orchestrator | "epoch": 1, 2026-04-13 01:20:33.699212 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-13 01:20:33.699224 | orchestrator | "modified": "2026-04-13T00:51:32.193023Z", 2026-04-13 01:20:33.699236 | orchestrator | "created": "2026-04-13T00:51:32.193023Z", 2026-04-13 01:20:33.699246 | orchestrator | "min_mon_release": 18, 2026-04-13 01:20:33.699257 | orchestrator | "min_mon_release_name": "reef", 2026-04-13 01:20:33.699268 | orchestrator | "election_strategy": 1, 2026-04-13 01:20:33.699279 | orchestrator | "disallowed_leaders": "", 2026-04-13 01:20:33.699289 | orchestrator | "stretch_mode": false, 2026-04-13 01:20:33.699300 | orchestrator | "tiebreaker_mon": "", 2026-04-13 01:20:33.699310 | orchestrator | "removed_ranks": "", 2026-04-13 01:20:33.699321 | orchestrator | "features": { 2026-04-13 
01:20:33.699332 | orchestrator | "persistent": [ 2026-04-13 01:20:33.699369 | orchestrator | "kraken", 2026-04-13 01:20:33.699380 | orchestrator | "luminous", 2026-04-13 01:20:33.699391 | orchestrator | "mimic", 2026-04-13 01:20:33.699428 | orchestrator | "osdmap-prune", 2026-04-13 01:20:33.699440 | orchestrator | "nautilus", 2026-04-13 01:20:33.699451 | orchestrator | "octopus", 2026-04-13 01:20:33.699462 | orchestrator | "pacific", 2026-04-13 01:20:33.699472 | orchestrator | "elector-pinging", 2026-04-13 01:20:33.699483 | orchestrator | "quincy", 2026-04-13 01:20:33.699494 | orchestrator | "reef" 2026-04-13 01:20:33.699504 | orchestrator | ], 2026-04-13 01:20:33.699515 | orchestrator | "optional": [] 2026-04-13 01:20:33.699526 | orchestrator | }, 2026-04-13 01:20:33.699536 | orchestrator | "mons": [ 2026-04-13 01:20:33.699547 | orchestrator | { 2026-04-13 01:20:33.699558 | orchestrator | "rank": 0, 2026-04-13 01:20:33.699584 | orchestrator | "name": "testbed-node-0", 2026-04-13 01:20:33.699595 | orchestrator | "public_addrs": { 2026-04-13 01:20:33.699606 | orchestrator | "addrvec": [ 2026-04-13 01:20:33.699617 | orchestrator | { 2026-04-13 01:20:33.699628 | orchestrator | "type": "v2", 2026-04-13 01:20:33.699638 | orchestrator | "addr": "192.168.16.10:3300", 2026-04-13 01:20:33.699649 | orchestrator | "nonce": 0 2026-04-13 01:20:33.699660 | orchestrator | }, 2026-04-13 01:20:33.699671 | orchestrator | { 2026-04-13 01:20:33.699682 | orchestrator | "type": "v1", 2026-04-13 01:20:33.699692 | orchestrator | "addr": "192.168.16.10:6789", 2026-04-13 01:20:33.699703 | orchestrator | "nonce": 0 2026-04-13 01:20:33.699714 | orchestrator | } 2026-04-13 01:20:33.699724 | orchestrator | ] 2026-04-13 01:20:33.699735 | orchestrator | }, 2026-04-13 01:20:33.699746 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-04-13 01:20:33.699757 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-04-13 01:20:33.699768 | orchestrator | "priority": 0, 2026-04-13 01:20:33.699779 
| orchestrator | "weight": 0, 2026-04-13 01:20:33.699790 | orchestrator | "crush_location": "{}" 2026-04-13 01:20:33.699800 | orchestrator | }, 2026-04-13 01:20:33.699811 | orchestrator | { 2026-04-13 01:20:33.699822 | orchestrator | "rank": 1, 2026-04-13 01:20:33.699832 | orchestrator | "name": "testbed-node-1", 2026-04-13 01:20:33.699843 | orchestrator | "public_addrs": { 2026-04-13 01:20:33.699854 | orchestrator | "addrvec": [ 2026-04-13 01:20:33.699864 | orchestrator | { 2026-04-13 01:20:33.699875 | orchestrator | "type": "v2", 2026-04-13 01:20:33.699886 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-13 01:20:33.699897 | orchestrator | "nonce": 0 2026-04-13 01:20:33.699908 | orchestrator | }, 2026-04-13 01:20:33.699918 | orchestrator | { 2026-04-13 01:20:33.699929 | orchestrator | "type": "v1", 2026-04-13 01:20:33.699940 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-13 01:20:33.699951 | orchestrator | "nonce": 0 2026-04-13 01:20:33.699961 | orchestrator | } 2026-04-13 01:20:33.699972 | orchestrator | ] 2026-04-13 01:20:33.699983 | orchestrator | }, 2026-04-13 01:20:33.699994 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-13 01:20:33.700005 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-13 01:20:33.700016 | orchestrator | "priority": 0, 2026-04-13 01:20:33.700026 | orchestrator | "weight": 0, 2026-04-13 01:20:33.700037 | orchestrator | "crush_location": "{}" 2026-04-13 01:20:33.700048 | orchestrator | }, 2026-04-13 01:20:33.700059 | orchestrator | { 2026-04-13 01:20:33.700069 | orchestrator | "rank": 2, 2026-04-13 01:20:33.700080 | orchestrator | "name": "testbed-node-2", 2026-04-13 01:20:33.700091 | orchestrator | "public_addrs": { 2026-04-13 01:20:33.700102 | orchestrator | "addrvec": [ 2026-04-13 01:20:33.700112 | orchestrator | { 2026-04-13 01:20:33.700123 | orchestrator | "type": "v2", 2026-04-13 01:20:33.700134 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-13 01:20:33.700145 | orchestrator | "nonce": 0 
2026-04-13 01:20:33.700156 | orchestrator | }, 2026-04-13 01:20:33.700166 | orchestrator | { 2026-04-13 01:20:33.700177 | orchestrator | "type": "v1", 2026-04-13 01:20:33.700188 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-13 01:20:33.700198 | orchestrator | "nonce": 0 2026-04-13 01:20:33.700209 | orchestrator | } 2026-04-13 01:20:33.700220 | orchestrator | ] 2026-04-13 01:20:33.700230 | orchestrator | }, 2026-04-13 01:20:33.700241 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-13 01:20:33.700252 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-13 01:20:33.700272 | orchestrator | "priority": 0, 2026-04-13 01:20:33.700283 | orchestrator | "weight": 0, 2026-04-13 01:20:33.700293 | orchestrator | "crush_location": "{}" 2026-04-13 01:20:33.700304 | orchestrator | } 2026-04-13 01:20:33.700315 | orchestrator | ] 2026-04-13 01:20:33.700326 | orchestrator | } 2026-04-13 01:20:33.700336 | orchestrator | } 2026-04-13 01:20:33.700532 | orchestrator | 2026-04-13 01:20:33.700552 | orchestrator | # Ceph free space status 2026-04-13 01:20:33.700563 | orchestrator | 2026-04-13 01:20:33.700574 | orchestrator | + echo 2026-04-13 01:20:33.700585 | orchestrator | + echo '# Ceph free space status' 2026-04-13 01:20:33.700596 | orchestrator | + echo 2026-04-13 01:20:33.700606 | orchestrator | + ceph df 2026-04-13 01:20:34.320705 | orchestrator | --- RAW STORAGE --- 2026-04-13 01:20:34.320804 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-13 01:20:34.320831 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.88 2026-04-13 01:20:34.320843 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.88 2026-04-13 01:20:34.320855 | orchestrator | 2026-04-13 01:20:34.320867 | orchestrator | --- POOLS --- 2026-04-13 01:20:34.320887 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-13 01:20:34.320907 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-04-13 01:20:34.320924 | orchestrator | cephfs_data 2 32 0 B 0 0 
B 0 35 GiB 2026-04-13 01:20:34.320943 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-04-13 01:20:34.320962 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-13 01:20:34.320977 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-13 01:20:34.320989 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-13 01:20:34.321000 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-13 01:20:34.321011 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-13 01:20:34.321022 | orchestrator | .rgw.root 9 32 3.5 KiB 7 56 KiB 0 53 GiB 2026-04-13 01:20:34.321033 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-13 01:20:34.321043 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-04-13 01:20:34.321054 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.95 35 GiB 2026-04-13 01:20:34.321065 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-13 01:20:34.321076 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-13 01:20:34.368266 | orchestrator | ++ semver latest 5.0.0 2026-04-13 01:20:34.423031 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-13 01:20:34.423123 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-13 01:20:34.423138 | orchestrator | + osism apply facts 2026-04-13 01:20:35.807853 | orchestrator | 2026-04-13 01:20:35 | INFO  | Prepare task for execution of facts. 2026-04-13 01:20:35.882084 | orchestrator | 2026-04-13 01:20:35 | INFO  | Task 4824c1c5-07ee-4533-b960-70f8946b75c1 (facts) was prepared for execution. 2026-04-13 01:20:35.882188 | orchestrator | 2026-04-13 01:20:35 | INFO  | It takes a moment until task 4824c1c5-07ee-4533-b960-70f8946b75c1 (facts) has been started and output is visible here. 
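The `ceph osd df tree` output above reports a VAR column per OSD, which is simply that OSD's utilization relative to the cluster-wide average (%USE / cluster %USE). A minimal sketch recomputing it, using figures copied from the log above; the dict field names mirror what `ceph osd df -f json` emits, but this is an illustration, not the Ceph implementation:

```python
# Sketch: recompute the VAR column printed by `ceph osd df tree`,
# i.e. each OSD's utilization relative to the cluster-wide average.
# Numbers below are taken from the log output above.

def var_ratios(osds, total_kb, total_used_kb):
    """Return {osd_id: VAR}, where VAR = osd %USE / cluster %USE."""
    cluster_use = total_used_kb / total_kb
    return {
        o["id"]: round((o["kb_used"] / o["kb"]) / cluster_use, 2)
        for o in osds
    }

GiB = 1024 * 1024  # KiB per GiB

# osd.2 and osd.4 on testbed-node-3: 20 GiB each, 7.24% and 4.49% used
osds = [
    {"id": 2, "kb": 20 * GiB, "kb_used": int(0.0724 * 20 * GiB)},
    {"id": 4, "kb": 20 * GiB, "kb_used": int(0.0449 * 20 * GiB)},
]
ratios = var_ratios(
    osds, total_kb=120 * GiB, total_used_kb=int(0.0588 * 120 * GiB)
)
print(ratios)  # matches the VAR column above: osd.2 -> 1.23, osd.4 -> 0.76
```

The MIN/MAX VAR line in the log (0.76/1.23) is just the extremes of this ratio, a quick balance indicator for the CRUSH distribution.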
2026-04-13 01:20:49.660125 | orchestrator | 2026-04-13 01:20:49.660223 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-13 01:20:49.660237 | orchestrator | 2026-04-13 01:20:49.660248 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-13 01:20:49.660258 | orchestrator | Monday 13 April 2026 01:20:39 +0000 (0:00:00.405) 0:00:00.405 ********** 2026-04-13 01:20:49.660268 | orchestrator | ok: [testbed-manager] 2026-04-13 01:20:49.660279 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:20:49.660289 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:20:49.660298 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:20:49.660308 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:20:49.660318 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:20:49.660354 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:20:49.660364 | orchestrator | 2026-04-13 01:20:49.660374 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-13 01:20:49.660385 | orchestrator | Monday 13 April 2026 01:20:40 +0000 (0:00:01.415) 0:00:01.820 ********** 2026-04-13 01:20:49.660442 | orchestrator | skipping: [testbed-manager] 2026-04-13 01:20:49.660455 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:20:49.660465 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:20:49.660474 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:20:49.660485 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:20:49.660494 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:20:49.660504 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:20:49.660513 | orchestrator | 2026-04-13 01:20:49.660523 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-13 01:20:49.660532 | orchestrator | 2026-04-13 01:20:49.660542 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-13 01:20:49.660552 | orchestrator | Monday 13 April 2026 01:20:42 +0000 (0:00:01.392) 0:00:03.213 ********** 2026-04-13 01:20:49.660561 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:20:49.660571 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:20:49.660598 | orchestrator | ok: [testbed-manager] 2026-04-13 01:20:49.660608 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:20:49.660617 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:20:49.660627 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:20:49.660636 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:20:49.660646 | orchestrator | 2026-04-13 01:20:49.660655 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-13 01:20:49.660665 | orchestrator | 2026-04-13 01:20:49.660676 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-13 01:20:49.660688 | orchestrator | Monday 13 April 2026 01:20:48 +0000 (0:00:06.321) 0:00:09.534 ********** 2026-04-13 01:20:49.660699 | orchestrator | skipping: [testbed-manager] 2026-04-13 01:20:49.660709 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:20:49.660721 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:20:49.660731 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:20:49.660742 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:20:49.660752 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:20:49.660763 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:20:49.660774 | orchestrator | 2026-04-13 01:20:49.660785 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:20:49.660797 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 01:20:49.660809 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-13 01:20:49.660820 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 01:20:49.660831 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 01:20:49.660842 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 01:20:49.660853 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 01:20:49.660862 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 01:20:49.660872 | orchestrator | 2026-04-13 01:20:49.660881 | orchestrator | 2026-04-13 01:20:49.660891 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:20:49.660900 | orchestrator | Monday 13 April 2026 01:20:49 +0000 (0:00:00.770) 0:00:10.305 ********** 2026-04-13 01:20:49.660957 | orchestrator | =============================================================================== 2026-04-13 01:20:49.660967 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.32s 2026-04-13 01:20:49.660977 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.42s 2026-04-13 01:20:49.660986 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.39s 2026-04-13 01:20:49.660996 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.77s 2026-04-13 01:20:49.860314 | orchestrator | + osism validate ceph-mons 2026-04-13 01:21:21.543172 | orchestrator | 2026-04-13 01:21:21.543287 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-04-13 01:21:21.543303 | orchestrator | 2026-04-13 01:21:21.543315 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-04-13 01:21:21.543326 | orchestrator | Monday 13 April 2026 01:21:05 +0000 (0:00:00.553) 0:00:00.553 ********** 2026-04-13 01:21:21.543338 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-13 01:21:21.543349 | orchestrator | 2026-04-13 01:21:21.543361 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-13 01:21:21.543372 | orchestrator | Monday 13 April 2026 01:21:06 +0000 (0:00:01.095) 0:00:01.649 ********** 2026-04-13 01:21:21.543384 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-13 01:21:21.543462 | orchestrator | 2026-04-13 01:21:21.543473 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-13 01:21:21.543485 | orchestrator | Monday 13 April 2026 01:21:06 +0000 (0:00:00.714) 0:00:02.363 ********** 2026-04-13 01:21:21.543496 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:21.543508 | orchestrator | 2026-04-13 01:21:21.543519 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-13 01:21:21.543530 | orchestrator | Monday 13 April 2026 01:21:07 +0000 (0:00:00.127) 0:00:02.490 ********** 2026-04-13 01:21:21.543541 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:21.543569 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:21:21.543580 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:21:21.543592 | orchestrator | 2026-04-13 01:21:21.543603 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-13 01:21:21.543614 | orchestrator | Monday 13 April 2026 01:21:07 +0000 (0:00:00.278) 0:00:02.768 ********** 2026-04-13 01:21:21.543625 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:21:21.543636 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:21.543647 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:21:21.543658 | 
orchestrator | 2026-04-13 01:21:21.543669 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-13 01:21:21.543680 | orchestrator | Monday 13 April 2026 01:21:08 +0000 (0:00:01.562) 0:00:04.331 ********** 2026-04-13 01:21:21.543691 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:21.543703 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:21:21.543717 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:21:21.543729 | orchestrator | 2026-04-13 01:21:21.543743 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-13 01:21:21.543756 | orchestrator | Monday 13 April 2026 01:21:09 +0000 (0:00:00.317) 0:00:04.649 ********** 2026-04-13 01:21:21.543769 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:21.543782 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:21:21.543794 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:21:21.543807 | orchestrator | 2026-04-13 01:21:21.543820 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-13 01:21:21.543910 | orchestrator | Monday 13 April 2026 01:21:09 +0000 (0:00:00.318) 0:00:04.967 ********** 2026-04-13 01:21:21.543924 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:21.543938 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:21:21.543951 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:21:21.543964 | orchestrator | 2026-04-13 01:21:21.543977 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-13 01:21:21.544010 | orchestrator | Monday 13 April 2026 01:21:09 +0000 (0:00:00.321) 0:00:05.289 ********** 2026-04-13 01:21:21.544023 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:21.544036 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:21:21.544048 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:21:21.544059 | orchestrator | 2026-04-13 
01:21:21.544070 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-13 01:21:21.544081 | orchestrator | Monday 13 April 2026 01:21:10 +0000 (0:00:00.509) 0:00:05.799 ********** 2026-04-13 01:21:21.544092 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:21.544103 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:21:21.544114 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:21:21.544125 | orchestrator | 2026-04-13 01:21:21.544136 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-13 01:21:21.544147 | orchestrator | Monday 13 April 2026 01:21:10 +0000 (0:00:00.337) 0:00:06.136 ********** 2026-04-13 01:21:21.544158 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:21.544169 | orchestrator | 2026-04-13 01:21:21.544180 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-13 01:21:21.544191 | orchestrator | Monday 13 April 2026 01:21:10 +0000 (0:00:00.252) 0:00:06.388 ********** 2026-04-13 01:21:21.544202 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:21.544213 | orchestrator | 2026-04-13 01:21:21.544225 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-13 01:21:21.544236 | orchestrator | Monday 13 April 2026 01:21:11 +0000 (0:00:00.254) 0:00:06.642 ********** 2026-04-13 01:21:21.544247 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:21.544258 | orchestrator | 2026-04-13 01:21:21.544269 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:21:21.544280 | orchestrator | Monday 13 April 2026 01:21:11 +0000 (0:00:00.247) 0:00:06.890 ********** 2026-04-13 01:21:21.544291 | orchestrator | 2026-04-13 01:21:21.544302 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:21:21.544313 | orchestrator | 
Monday 13 April 2026 01:21:11 +0000 (0:00:00.069) 0:00:06.959 ********** 2026-04-13 01:21:21.544324 | orchestrator | 2026-04-13 01:21:21.544335 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:21:21.544346 | orchestrator | Monday 13 April 2026 01:21:11 +0000 (0:00:00.072) 0:00:07.031 ********** 2026-04-13 01:21:21.544357 | orchestrator | 2026-04-13 01:21:21.544368 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-13 01:21:21.544379 | orchestrator | Monday 13 April 2026 01:21:11 +0000 (0:00:00.251) 0:00:07.283 ********** 2026-04-13 01:21:21.544408 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:21.544419 | orchestrator | 2026-04-13 01:21:21.544430 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-13 01:21:21.544441 | orchestrator | Monday 13 April 2026 01:21:12 +0000 (0:00:00.278) 0:00:07.561 ********** 2026-04-13 01:21:21.544452 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:21.544463 | orchestrator | 2026-04-13 01:21:21.544492 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-04-13 01:21:21.544504 | orchestrator | Monday 13 April 2026 01:21:12 +0000 (0:00:00.271) 0:00:07.833 ********** 2026-04-13 01:21:21.544515 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:21.544526 | orchestrator | 2026-04-13 01:21:21.544537 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-04-13 01:21:21.544548 | orchestrator | Monday 13 April 2026 01:21:12 +0000 (0:00:00.133) 0:00:07.966 ********** 2026-04-13 01:21:21.544559 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:21:21.544570 | orchestrator | 2026-04-13 01:21:21.544581 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-04-13 01:21:21.544592 | orchestrator | Monday 
13 April 2026 01:21:14 +0000 (0:00:01.666) 0:00:09.632 ********** 2026-04-13 01:21:21.544603 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:21.544614 | orchestrator | 2026-04-13 01:21:21.544625 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-04-13 01:21:21.544644 | orchestrator | Monday 13 April 2026 01:21:14 +0000 (0:00:00.343) 0:00:09.976 ********** 2026-04-13 01:21:21.544655 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:21.544666 | orchestrator | 2026-04-13 01:21:21.544677 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-04-13 01:21:21.544688 | orchestrator | Monday 13 April 2026 01:21:14 +0000 (0:00:00.125) 0:00:10.101 ********** 2026-04-13 01:21:21.544699 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:21.544710 | orchestrator | 2026-04-13 01:21:21.544721 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-04-13 01:21:21.544732 | orchestrator | Monday 13 April 2026 01:21:14 +0000 (0:00:00.309) 0:00:10.410 ********** 2026-04-13 01:21:21.544743 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:21.544754 | orchestrator | 2026-04-13 01:21:21.544765 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-04-13 01:21:21.544776 | orchestrator | Monday 13 April 2026 01:21:15 +0000 (0:00:00.325) 0:00:10.736 ********** 2026-04-13 01:21:21.544787 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:21.544798 | orchestrator | 2026-04-13 01:21:21.544809 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-04-13 01:21:21.544820 | orchestrator | Monday 13 April 2026 01:21:15 +0000 (0:00:00.133) 0:00:10.869 ********** 2026-04-13 01:21:21.544831 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:21.544842 | orchestrator | 2026-04-13 01:21:21.544853 | orchestrator | TASK [Prepare 
status test vars] ************************************************ 2026-04-13 01:21:21.544864 | orchestrator | Monday 13 April 2026 01:21:15 +0000 (0:00:00.134) 0:00:11.004 ********** 2026-04-13 01:21:21.544875 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:21.544886 | orchestrator | 2026-04-13 01:21:21.544897 | orchestrator | TASK [Gather status data] ****************************************************** 2026-04-13 01:21:21.544908 | orchestrator | Monday 13 April 2026 01:21:15 +0000 (0:00:00.324) 0:00:11.329 ********** 2026-04-13 01:21:21.544919 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:21:21.544930 | orchestrator | 2026-04-13 01:21:21.544941 | orchestrator | TASK [Set health test data] **************************************************** 2026-04-13 01:21:21.544952 | orchestrator | Monday 13 April 2026 01:21:17 +0000 (0:00:01.351) 0:00:12.680 ********** 2026-04-13 01:21:21.544963 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:21.544974 | orchestrator | 2026-04-13 01:21:21.544985 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-04-13 01:21:21.544996 | orchestrator | Monday 13 April 2026 01:21:17 +0000 (0:00:00.334) 0:00:13.015 ********** 2026-04-13 01:21:21.545006 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:21.545017 | orchestrator | 2026-04-13 01:21:21.545028 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-04-13 01:21:21.545039 | orchestrator | Monday 13 April 2026 01:21:17 +0000 (0:00:00.144) 0:00:13.159 ********** 2026-04-13 01:21:21.545050 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:21.545061 | orchestrator | 2026-04-13 01:21:21.545072 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-04-13 01:21:21.545083 | orchestrator | Monday 13 April 2026 01:21:17 +0000 (0:00:00.149) 0:00:13.309 ********** 2026-04-13 01:21:21.545094 | orchestrator | 
skipping: [testbed-node-0] 2026-04-13 01:21:21.545105 | orchestrator | 2026-04-13 01:21:21.545116 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-04-13 01:21:21.545127 | orchestrator | Monday 13 April 2026 01:21:18 +0000 (0:00:00.168) 0:00:13.478 ********** 2026-04-13 01:21:21.545137 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:21.545148 | orchestrator | 2026-04-13 01:21:21.545169 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-13 01:21:21.545180 | orchestrator | Monday 13 April 2026 01:21:18 +0000 (0:00:00.126) 0:00:13.604 ********** 2026-04-13 01:21:21.545191 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-13 01:21:21.545203 | orchestrator | 2026-04-13 01:21:21.545214 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-13 01:21:21.545229 | orchestrator | Monday 13 April 2026 01:21:18 +0000 (0:00:00.274) 0:00:13.878 ********** 2026-04-13 01:21:21.545240 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:21.545251 | orchestrator | 2026-04-13 01:21:21.545267 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-13 01:21:21.545279 | orchestrator | Monday 13 April 2026 01:21:18 +0000 (0:00:00.248) 0:00:14.126 ********** 2026-04-13 01:21:21.545290 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-13 01:21:21.545301 | orchestrator | 2026-04-13 01:21:21.545312 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-13 01:21:21.545322 | orchestrator | Monday 13 April 2026 01:21:20 +0000 (0:00:01.870) 0:00:15.996 ********** 2026-04-13 01:21:21.545333 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-13 01:21:21.545344 | orchestrator | 2026-04-13 01:21:21.545355 | orchestrator | TASK [Aggregate 
test results step three] *************************************** 2026-04-13 01:21:21.545366 | orchestrator | Monday 13 April 2026 01:21:20 +0000 (0:00:00.273) 0:00:16.270 ********** 2026-04-13 01:21:21.545377 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-13 01:21:21.545413 | orchestrator | 2026-04-13 01:21:21.545432 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:21:23.950930 | orchestrator | Monday 13 April 2026 01:21:21 +0000 (0:00:00.675) 0:00:16.946 ********** 2026-04-13 01:21:23.951032 | orchestrator | 2026-04-13 01:21:23.951048 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:21:23.951060 | orchestrator | Monday 13 April 2026 01:21:21 +0000 (0:00:00.088) 0:00:17.034 ********** 2026-04-13 01:21:23.951072 | orchestrator | 2026-04-13 01:21:23.951083 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:21:23.951094 | orchestrator | Monday 13 April 2026 01:21:21 +0000 (0:00:00.128) 0:00:17.163 ********** 2026-04-13 01:21:23.951104 | orchestrator | 2026-04-13 01:21:23.951115 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-13 01:21:23.951126 | orchestrator | Monday 13 April 2026 01:21:21 +0000 (0:00:00.115) 0:00:17.278 ********** 2026-04-13 01:21:23.951137 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-13 01:21:23.951148 | orchestrator | 2026-04-13 01:21:23.951159 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-13 01:21:23.951170 | orchestrator | Monday 13 April 2026 01:21:23 +0000 (0:00:01.354) 0:00:18.633 ********** 2026-04-13 01:21:23.951181 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-13 01:21:23.951192 | orchestrator |  "msg": [ 2026-04-13 
01:21:23.951223 | orchestrator |  "Validator run completed.", 2026-04-13 01:21:23.951235 | orchestrator |  "You can find the report file here:", 2026-04-13 01:21:23.951246 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-13T01:21:06+00:00-report.json", 2026-04-13 01:21:23.951258 | orchestrator |  "on the following host:", 2026-04-13 01:21:23.951269 | orchestrator |  "testbed-manager" 2026-04-13 01:21:23.951280 | orchestrator |  ] 2026-04-13 01:21:23.951291 | orchestrator | } 2026-04-13 01:21:23.951302 | orchestrator | 2026-04-13 01:21:23.951313 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:21:23.951325 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-13 01:21:23.951337 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 01:21:23.951348 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 01:21:23.951359 | orchestrator | 2026-04-13 01:21:23.951370 | orchestrator | 2026-04-13 01:21:23.951430 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:21:23.951443 | orchestrator | Monday 13 April 2026 01:21:23 +0000 (0:00:00.380) 0:00:19.013 ********** 2026-04-13 01:21:23.951453 | orchestrator | =============================================================================== 2026-04-13 01:21:23.951465 | orchestrator | Aggregate test results step one ----------------------------------------- 1.87s 2026-04-13 01:21:23.951478 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.67s 2026-04-13 01:21:23.951495 | orchestrator | Get container info ------------------------------------------------------ 1.56s 2026-04-13 01:21:23.951514 | orchestrator | Write report file 
------------------------------------------------------- 1.35s 2026-04-13 01:21:23.951536 | orchestrator | Gather status data ------------------------------------------------------ 1.35s 2026-04-13 01:21:23.951556 | orchestrator | Get timestamp for report file ------------------------------------------- 1.10s 2026-04-13 01:21:23.951576 | orchestrator | Create report output directory ------------------------------------------ 0.71s 2026-04-13 01:21:23.951590 | orchestrator | Aggregate test results step three --------------------------------------- 0.68s 2026-04-13 01:21:23.951603 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.51s 2026-04-13 01:21:23.951616 | orchestrator | Flush handlers ---------------------------------------------------------- 0.39s 2026-04-13 01:21:23.951628 | orchestrator | Print report file information ------------------------------------------- 0.38s 2026-04-13 01:21:23.951641 | orchestrator | Set quorum test data ---------------------------------------------------- 0.34s 2026-04-13 01:21:23.951653 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.34s 2026-04-13 01:21:23.951665 | orchestrator | Set health test data ---------------------------------------------------- 0.33s 2026-04-13 01:21:23.951678 | orchestrator | Flush handlers ---------------------------------------------------------- 0.33s 2026-04-13 01:21:23.951691 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.33s 2026-04-13 01:21:23.951704 | orchestrator | Prepare status test vars ------------------------------------------------ 0.32s 2026-04-13 01:21:23.951716 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s 2026-04-13 01:21:23.951729 | orchestrator | Set test result to passed if container is existing ---------------------- 0.32s 2026-04-13 01:21:23.951742 | orchestrator | Set test result to failed if 
container is missing ----------------------- 0.32s 2026-04-13 01:21:24.188252 | orchestrator | + osism validate ceph-mgrs 2026-04-13 01:21:54.100516 | orchestrator | 2026-04-13 01:21:54.100646 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-04-13 01:21:54.100662 | orchestrator | 2026-04-13 01:21:54.100674 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-13 01:21:54.100684 | orchestrator | Monday 13 April 2026 01:21:39 +0000 (0:00:00.568) 0:00:00.568 ********** 2026-04-13 01:21:54.100695 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-13 01:21:54.100705 | orchestrator | 2026-04-13 01:21:54.100715 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-13 01:21:54.100725 | orchestrator | Monday 13 April 2026 01:21:40 +0000 (0:00:01.056) 0:00:01.624 ********** 2026-04-13 01:21:54.100734 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-13 01:21:54.100744 | orchestrator | 2026-04-13 01:21:54.100754 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-13 01:21:54.100764 | orchestrator | Monday 13 April 2026 01:21:41 +0000 (0:00:00.736) 0:00:02.361 ********** 2026-04-13 01:21:54.100774 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:54.100786 | orchestrator | 2026-04-13 01:21:54.100795 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-13 01:21:54.100805 | orchestrator | Monday 13 April 2026 01:21:41 +0000 (0:00:00.116) 0:00:02.477 ********** 2026-04-13 01:21:54.100815 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:54.100824 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:21:54.100834 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:21:54.100862 | orchestrator | 2026-04-13 01:21:54.100872 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-04-13 01:21:54.100882 | orchestrator | Monday 13 April 2026 01:21:41 +0000 (0:00:00.307) 0:00:02.785 ********** 2026-04-13 01:21:54.100891 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:54.100901 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:21:54.100910 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:21:54.100920 | orchestrator | 2026-04-13 01:21:54.100929 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-13 01:21:54.100947 | orchestrator | Monday 13 April 2026 01:21:43 +0000 (0:00:01.554) 0:00:04.339 ********** 2026-04-13 01:21:54.100957 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:54.100966 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:21:54.100976 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:21:54.100986 | orchestrator | 2026-04-13 01:21:54.100995 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-13 01:21:54.101005 | orchestrator | Monday 13 April 2026 01:21:43 +0000 (0:00:00.308) 0:00:04.647 ********** 2026-04-13 01:21:54.101014 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:54.101031 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:21:54.101048 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:21:54.101074 | orchestrator | 2026-04-13 01:21:54.101093 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-13 01:21:54.101108 | orchestrator | Monday 13 April 2026 01:21:43 +0000 (0:00:00.349) 0:00:04.997 ********** 2026-04-13 01:21:54.101123 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:54.101139 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:21:54.101155 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:21:54.101173 | orchestrator | 2026-04-13 01:21:54.101190 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-04-13 01:21:54.101208 | orchestrator | Monday 13 April 2026 01:21:44 +0000 (0:00:00.297) 0:00:05.295 ********** 2026-04-13 01:21:54.101225 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:54.101242 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:21:54.101260 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:21:54.101279 | orchestrator | 2026-04-13 01:21:54.101297 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-04-13 01:21:54.101313 | orchestrator | Monday 13 April 2026 01:21:44 +0000 (0:00:00.477) 0:00:05.772 ********** 2026-04-13 01:21:54.101331 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:54.101343 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:21:54.101352 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:21:54.101362 | orchestrator | 2026-04-13 01:21:54.101371 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-13 01:21:54.101409 | orchestrator | Monday 13 April 2026 01:21:44 +0000 (0:00:00.310) 0:00:06.083 ********** 2026-04-13 01:21:54.101420 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:54.101429 | orchestrator | 2026-04-13 01:21:54.101439 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-13 01:21:54.101449 | orchestrator | Monday 13 April 2026 01:21:45 +0000 (0:00:00.246) 0:00:06.329 ********** 2026-04-13 01:21:54.101458 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:54.101468 | orchestrator | 2026-04-13 01:21:54.101478 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-13 01:21:54.101488 | orchestrator | Monday 13 April 2026 01:21:45 +0000 (0:00:00.248) 0:00:06.578 ********** 2026-04-13 01:21:54.101497 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:54.101507 | orchestrator | 2026-04-13 01:21:54.101517 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-04-13 01:21:54.101526 | orchestrator | Monday 13 April 2026 01:21:45 +0000 (0:00:00.247) 0:00:06.825 ********** 2026-04-13 01:21:54.101536 | orchestrator | 2026-04-13 01:21:54.101545 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:21:54.101555 | orchestrator | Monday 13 April 2026 01:21:45 +0000 (0:00:00.070) 0:00:06.896 ********** 2026-04-13 01:21:54.101576 | orchestrator | 2026-04-13 01:21:54.101586 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:21:54.101595 | orchestrator | Monday 13 April 2026 01:21:45 +0000 (0:00:00.082) 0:00:06.978 ********** 2026-04-13 01:21:54.101605 | orchestrator | 2026-04-13 01:21:54.101614 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-13 01:21:54.101624 | orchestrator | Monday 13 April 2026 01:21:46 +0000 (0:00:00.294) 0:00:07.272 ********** 2026-04-13 01:21:54.101633 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:54.101643 | orchestrator | 2026-04-13 01:21:54.101653 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-13 01:21:54.101662 | orchestrator | Monday 13 April 2026 01:21:46 +0000 (0:00:00.262) 0:00:07.534 ********** 2026-04-13 01:21:54.101672 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:54.101682 | orchestrator | 2026-04-13 01:21:54.101709 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-04-13 01:21:54.101719 | orchestrator | Monday 13 April 2026 01:21:46 +0000 (0:00:00.254) 0:00:07.788 ********** 2026-04-13 01:21:54.101729 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:54.101739 | orchestrator | 2026-04-13 01:21:54.101749 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-04-13 01:21:54.101759 | orchestrator | Monday 13 April 2026 01:21:46 +0000 (0:00:00.125) 0:00:07.914 ********** 2026-04-13 01:21:54.101768 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:21:54.101778 | orchestrator | 2026-04-13 01:21:54.101787 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-04-13 01:21:54.101797 | orchestrator | Monday 13 April 2026 01:21:48 +0000 (0:00:01.569) 0:00:09.484 ********** 2026-04-13 01:21:54.101807 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:54.101816 | orchestrator | 2026-04-13 01:21:54.101826 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-04-13 01:21:54.101836 | orchestrator | Monday 13 April 2026 01:21:48 +0000 (0:00:00.268) 0:00:09.752 ********** 2026-04-13 01:21:54.101846 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:54.101856 | orchestrator | 2026-04-13 01:21:54.101865 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-04-13 01:21:54.101875 | orchestrator | Monday 13 April 2026 01:21:48 +0000 (0:00:00.321) 0:00:10.073 ********** 2026-04-13 01:21:54.101885 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:54.101894 | orchestrator | 2026-04-13 01:21:54.101904 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-04-13 01:21:54.101913 | orchestrator | Monday 13 April 2026 01:21:48 +0000 (0:00:00.143) 0:00:10.216 ********** 2026-04-13 01:21:54.101923 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:21:54.101932 | orchestrator | 2026-04-13 01:21:54.101942 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-13 01:21:54.101952 | orchestrator | Monday 13 April 2026 01:21:49 +0000 (0:00:00.144) 0:00:10.361 ********** 2026-04-13 01:21:54.101962 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-13 
01:21:54.101971 | orchestrator | 2026-04-13 01:21:54.101981 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-13 01:21:54.101990 | orchestrator | Monday 13 April 2026 01:21:49 +0000 (0:00:00.289) 0:00:10.650 ********** 2026-04-13 01:21:54.102000 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:21:54.102009 | orchestrator | 2026-04-13 01:21:54.102108 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-13 01:21:54.102119 | orchestrator | Monday 13 April 2026 01:21:49 +0000 (0:00:00.275) 0:00:10.926 ********** 2026-04-13 01:21:54.102138 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-13 01:21:54.102148 | orchestrator | 2026-04-13 01:21:54.102158 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-13 01:21:54.102167 | orchestrator | Monday 13 April 2026 01:21:51 +0000 (0:00:01.665) 0:00:12.591 ********** 2026-04-13 01:21:54.102177 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-13 01:21:54.102198 | orchestrator | 2026-04-13 01:21:54.102214 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-13 01:21:54.102239 | orchestrator | Monday 13 April 2026 01:21:51 +0000 (0:00:00.296) 0:00:12.887 ********** 2026-04-13 01:21:54.102255 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-13 01:21:54.102271 | orchestrator | 2026-04-13 01:21:54.102286 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:21:54.102301 | orchestrator | Monday 13 April 2026 01:21:51 +0000 (0:00:00.333) 0:00:13.221 ********** 2026-04-13 01:21:54.102317 | orchestrator | 2026-04-13 01:21:54.102333 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:21:54.102350 | orchestrator 
| Monday 13 April 2026 01:21:52 +0000 (0:00:00.070) 0:00:13.291 ********** 2026-04-13 01:21:54.102366 | orchestrator | 2026-04-13 01:21:54.102471 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:21:54.102483 | orchestrator | Monday 13 April 2026 01:21:52 +0000 (0:00:00.070) 0:00:13.361 ********** 2026-04-13 01:21:54.102493 | orchestrator | 2026-04-13 01:21:54.102503 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-13 01:21:54.102512 | orchestrator | Monday 13 April 2026 01:21:52 +0000 (0:00:00.077) 0:00:13.439 ********** 2026-04-13 01:21:54.102522 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-13 01:21:54.102532 | orchestrator | 2026-04-13 01:21:54.102541 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-13 01:21:54.102551 | orchestrator | Monday 13 April 2026 01:21:53 +0000 (0:00:01.426) 0:00:14.866 ********** 2026-04-13 01:21:54.102561 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-13 01:21:54.102571 | orchestrator |  "msg": [ 2026-04-13 01:21:54.102580 | orchestrator |  "Validator run completed.", 2026-04-13 01:21:54.102590 | orchestrator |  "You can find the report file here:", 2026-04-13 01:21:54.102600 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-13T01:21:40+00:00-report.json", 2026-04-13 01:21:54.102610 | orchestrator |  "on the following host:", 2026-04-13 01:21:54.102620 | orchestrator |  "testbed-manager" 2026-04-13 01:21:54.102630 | orchestrator |  ] 2026-04-13 01:21:54.102640 | orchestrator | } 2026-04-13 01:21:54.102650 | orchestrator | 2026-04-13 01:21:54.102659 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:21:54.102670 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-04-13 01:21:54.102681 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 01:21:54.102702 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 01:21:54.539272 | orchestrator | 2026-04-13 01:21:54.539445 | orchestrator | 2026-04-13 01:21:54.539466 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:21:54.539480 | orchestrator | Monday 13 April 2026 01:21:54 +0000 (0:00:00.444) 0:00:15.311 ********** 2026-04-13 01:21:54.539491 | orchestrator | =============================================================================== 2026-04-13 01:21:54.539502 | orchestrator | Aggregate test results step one ----------------------------------------- 1.67s 2026-04-13 01:21:54.539515 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.57s 2026-04-13 01:21:54.539534 | orchestrator | Get container info ------------------------------------------------------ 1.55s 2026-04-13 01:21:54.539553 | orchestrator | Write report file ------------------------------------------------------- 1.43s 2026-04-13 01:21:54.539572 | orchestrator | Get timestamp for report file ------------------------------------------- 1.06s 2026-04-13 01:21:54.539589 | orchestrator | Create report output directory ------------------------------------------ 0.74s 2026-04-13 01:21:54.539640 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.48s 2026-04-13 01:21:54.539660 | orchestrator | Flush handlers ---------------------------------------------------------- 0.45s 2026-04-13 01:21:54.539679 | orchestrator | Print report file information ------------------------------------------- 0.45s 2026-04-13 01:21:54.539694 | orchestrator | Set test result to passed if container is existing ---------------------- 0.35s 2026-04-13 01:21:54.539705 | 
orchestrator | Aggregate test results step three --------------------------------------- 0.33s 2026-04-13 01:21:54.539716 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s 2026-04-13 01:21:54.539728 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.31s 2026-04-13 01:21:54.539765 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2026-04-13 01:21:54.539785 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2026-04-13 01:21:54.539802 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2026-04-13 01:21:54.539821 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s 2026-04-13 01:21:54.539840 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.29s 2026-04-13 01:21:54.539861 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.28s 2026-04-13 01:21:54.539881 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.27s 2026-04-13 01:21:54.866522 | orchestrator | + osism validate ceph-osds 2026-04-13 01:22:14.615054 | orchestrator | 2026-04-13 01:22:14.615149 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-04-13 01:22:14.615158 | orchestrator | 2026-04-13 01:22:14.615164 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-13 01:22:14.615171 | orchestrator | Monday 13 April 2026 01:22:10 +0000 (0:00:00.563) 0:00:00.563 ********** 2026-04-13 01:22:14.615177 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-13 01:22:14.615183 | orchestrator | 2026-04-13 01:22:14.615189 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-04-13 01:22:14.615198 | orchestrator | Monday 13 April 2026 01:22:11 +0000 (0:00:01.072) 0:00:01.636 ********** 2026-04-13 01:22:14.615206 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-13 01:22:14.615215 | orchestrator | 2026-04-13 01:22:14.615224 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-13 01:22:14.615272 | orchestrator | Monday 13 April 2026 01:22:11 +0000 (0:00:00.265) 0:00:01.902 ********** 2026-04-13 01:22:14.615280 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-13 01:22:14.615290 | orchestrator | 2026-04-13 01:22:14.615298 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-13 01:22:14.615313 | orchestrator | Monday 13 April 2026 01:22:12 +0000 (0:00:00.734) 0:00:02.636 ********** 2026-04-13 01:22:14.615323 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:14.615333 | orchestrator | 2026-04-13 01:22:14.615341 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-13 01:22:14.615361 | orchestrator | Monday 13 April 2026 01:22:12 +0000 (0:00:00.138) 0:00:02.775 ********** 2026-04-13 01:22:14.615371 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:14.615380 | orchestrator | 2026-04-13 01:22:14.615389 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-13 01:22:14.615398 | orchestrator | Monday 13 April 2026 01:22:12 +0000 (0:00:00.140) 0:00:02.915 ********** 2026-04-13 01:22:14.615407 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:14.615462 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:22:14.615471 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:22:14.615480 | orchestrator | 2026-04-13 01:22:14.615489 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-04-13 01:22:14.615498 | orchestrator | Monday 13 April 2026 01:22:13 +0000 (0:00:00.465) 0:00:03.381 ********** 2026-04-13 01:22:14.615528 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:14.615537 | orchestrator | 2026-04-13 01:22:14.615546 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-13 01:22:14.615556 | orchestrator | Monday 13 April 2026 01:22:13 +0000 (0:00:00.157) 0:00:03.538 ********** 2026-04-13 01:22:14.615565 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:14.615574 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:14.615582 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:14.615591 | orchestrator | 2026-04-13 01:22:14.615599 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-04-13 01:22:14.615605 | orchestrator | Monday 13 April 2026 01:22:13 +0000 (0:00:00.328) 0:00:03.866 ********** 2026-04-13 01:22:14.615611 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:14.615616 | orchestrator | 2026-04-13 01:22:14.615622 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-13 01:22:14.615629 | orchestrator | Monday 13 April 2026 01:22:13 +0000 (0:00:00.358) 0:00:04.225 ********** 2026-04-13 01:22:14.615636 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:14.615642 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:14.615648 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:14.615655 | orchestrator | 2026-04-13 01:22:14.615661 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-04-13 01:22:14.615668 | orchestrator | Monday 13 April 2026 01:22:14 +0000 (0:00:00.334) 0:00:04.559 ********** 2026-04-13 01:22:14.615676 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7f42fe923b690ba1f06bd866fd85577e38f9fa3a2e156c3e0f3385ec513a389a', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-13 01:22:14.615685 | orchestrator | skipping: [testbed-node-3] => (item={'id': '878996f61dce2fec9d6d9c5fd7f792791cf23b9bc9d49e12a4e66809ba4a5680', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-13 01:22:14.615692 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c11d8b3f0ddff303d904c5504141fce0a262e0db400c1e231363cc059bc370f5', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-04-13 01:22:14.615715 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f1c0559d684975e2cfea194a68ef1b5f5e6fc0481a19d1b0579eb18608eb7fa5', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-04-13 01:22:14.615724 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4d5f3589529ba7921361c8f3aff844cee13d0c49877a5c41a3996a92f04abdec', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-13 01:22:14.615745 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fd204dfbc0d664f9a3ebb1f0f159043719eb636370a9e106ed91d5b3322f4b51', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2026-04-13 01:22:14.615752 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7c150b26514f4c9b166667205a8950001c9505ee6631370e0dcafbc8b3622d91', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2026-04-13 01:22:14.615761 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': '1bddca838ba0776969a70cba73f174a0b15a7ef2abb09807abd229c94e734a8f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-13 01:22:14.615768 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e6b3bd2d521301ea036800cab7ad614d766237e58ebdec037840d0a9eb2a3eeb', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-13 01:22:14.615780 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c5248ed06613b3f70d7fb478f5927ed8493487d78c4f66736fb628e27b2a4e5e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-13 01:22:14.615788 | orchestrator | ok: [testbed-node-3] => (item={'id': '9a66b869c5ef46a6ba24b02ae4b834e2009f27d17c1e0a55e4459fbba0f43d63', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-13 01:22:14.615796 | orchestrator | ok: [testbed-node-3] => (item={'id': '62b672e14dab851242aeea93ce2f9214e075ca498c6408fa9d0d8242fa9da835', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-13 01:22:14.615802 | orchestrator | skipping: [testbed-node-3] => (item={'id': '65817df1605a247908d1dfcef02c7d7c35ae6e1fb52f3d1e1d25f6a354a9db5a', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-13 01:22:14.615808 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6ae72233683cee1be96bae2add5d09a9683e9b1635466d4747c821ba2d581d01', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 
30 minutes (healthy)'})  2026-04-13 01:22:14.615815 | orchestrator | skipping: [testbed-node-3] => (item={'id': '067b17fbc03fb8c3879ffc59a656542c3d64eefd58316be64cb66c1f73bbab8c', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-04-13 01:22:14.615821 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5dcdcb628f3d2d3da7bc76d237dd8db18271a3ec89ad4c4894bf1653819599bb', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2026-04-13 01:22:14.615827 | orchestrator | skipping: [testbed-node-3] => (item={'id': '35f433615941a42ef061bff69bf46c53fff602d26a3488fe4d21074a519efd64', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-04-13 01:22:14.615834 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c65e238591b01213694b39a3b35012ff2aaa885b57f05b0417f1d91236bc13fd', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-04-13 01:22:14.615840 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dd4347437136f5fd01cec621617de79a3f66ef467a03a6f2c408e14a2b4789b4', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-13 01:22:14.615847 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1fa418da97dfca22d056061c92954cbc60b718bbc6ce681a1b598431839b0e07', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-13 01:22:14.615859 | orchestrator | skipping: [testbed-node-4] => (item={'id': '595cd785cef75f0e5164170f11135663a5f89f234c02b5afaf6eb6e9c37dc37d', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': 
'/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-04-13 01:22:14.615870 | orchestrator | skipping: [testbed-node-4] => (item={'id': '26b2ca36d054f43d12e20c38baf39e4f7231e6d32311446d984f003139445d28', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-04-13 01:22:14.773273 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8cef18f6bd0395744a61af4a2331a8901e9798e6d55eb51626729d1848ed10db', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-13 01:22:14.773554 | orchestrator | skipping: [testbed-node-4] => (item={'id': '45b29e1c2b1d2dbc05b7f4661817f9027e85d8206070ccdf2a0fa4ef747579e8', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2026-04-13 01:22:14.773593 | orchestrator | skipping: [testbed-node-4] => (item={'id': '70815e9ed38ac93e40e429e1c4c4c722ae948d55e816a15b48350f895b58fd93', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2026-04-13 01:22:14.773613 | orchestrator | skipping: [testbed-node-4] => (item={'id': '509de888a76f8a26cab8ac5cba3685e6cb0d735b9e7a6fd874b0a43b7e85a78a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-13 01:22:14.773634 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ebf99cf4719a948821efef328199694059e367b5d114aca9617bb0b4215a465e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-13 01:22:14.773652 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'dc48465f3f157123d3d3c43a4f4c92ee9d21ae246b4c97042ee63723aff72d4b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-13 01:22:14.773674 | orchestrator | ok: [testbed-node-4] => (item={'id': '8c09e4ca5ea7bb01c0688c2d873c7b733abf2988c17c57888b1e7fc955b93ee3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-13 01:22:14.773695 | orchestrator | ok: [testbed-node-4] => (item={'id': 'aa03c632df6ca4b28e4d502f502b2be0240514f9183ece1089e798cc3e1e9773', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-13 01:22:14.773714 | orchestrator | skipping: [testbed-node-4] => (item={'id': '54c5af0e6d4bfea52e8a3bcf941a2005e583fc94d7f5bb9af1662623a149b032', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-13 01:22:14.773733 | orchestrator | skipping: [testbed-node-4] => (item={'id': '07926e82da64b5e085958266a5171c4d851a02c4cf7551a00a4cc702060af89d', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-04-13 01:22:14.773751 | orchestrator | skipping: [testbed-node-4] => (item={'id': '80605f8dc3a47a76e18e2560b3472749ecd0d9bfd511ec3f6e5e042517e33430', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-04-13 01:22:14.773768 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e31b6e9fb3ffec1561c10f7ccd1391b48378b206a524a1261cfd8d83e88ee7b9', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2026-04-13 01:22:14.773787 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': '5da939963151e98abf2cc149de0bbcf143c853fa15a24a134e0d89f6de5c6257', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-04-13 01:22:14.773826 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3c6b8f5093ebf5bc8088a227f1b112e4b5ee7ea75e074847dcedeb8ebfc75a41', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-04-13 01:22:14.773847 | orchestrator | skipping: [testbed-node-5] => (item={'id': '012e1737c9e9f66bc3ef7557e47b18192e7b09937e816bb4114a87cf561bd212', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-13 01:22:14.773907 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'baee13f13bc1d6081e3e133fbb76e32ac4ba159f7b070315dc9c0df23612d204', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-13 01:22:14.773932 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3540dc3fd7ee832ec52e16dbed079b586cbe898db462e13c684d83a382f72a5d', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-04-13 01:22:14.773951 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7bf1e680fc9569d560d136b10dc02253237d87ce0b844b12c82f1ea58431e70f', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-04-13 01:22:14.773970 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6435257f1385fc970ba3b9bd132dd05dbd666dbbfc176d2e60cedaf5b8312bc6', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 
'running', 'status': 'Up 15 minutes'})  2026-04-13 01:22:14.773990 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e1e469e3405584cf1a2b8ca63ac8bea7fc72170f1b055ea91b8e82d3d9029a55', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2026-04-13 01:22:14.774009 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2e126627402dccc0fd82f90d725cf9972c00a64f931d2b8cfcef358342dc05ba', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2026-04-13 01:22:14.774105 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c8dabdfebef085920c47aa01100c02baebd2370a00e47c2d29a002ca789779cd', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-13 01:22:14.774124 | orchestrator | skipping: [testbed-node-5] => (item={'id': '197e2cadc3c9420e1d90c11ed7426ad7f5a128fc9b58c56543bad8d3acb876b0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-13 01:22:14.774145 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a2535efab095263c197262bfad228e176389b8d5e169f2b83cedefea55854cb8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-13 01:22:14.774164 | orchestrator | ok: [testbed-node-5] => (item={'id': '3e19c516c1220512039736c95c0cd7b5be81a208946ab0e174da6ab1280fd3cd', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-13 01:22:14.774184 | orchestrator | ok: [testbed-node-5] => (item={'id': 'cdcac79ebd1e4f63149bc19a8e63e42be952fb9f28c39ac13dfe2ab4c85778fb', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-13 01:22:14.774200 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ffa9173ad950391af0133daa7db0c205cd41f06e6578ac1cc76a5724e0d8550f', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-13 01:22:14.774219 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ba106885ed1960c688772fb3d21adb38066019e4d87813138b0db3273dc495d9', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-04-13 01:22:14.774248 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5bbb5f4aceacf3903d715a5c91c2fd16af238f8014f1cbc34a67e9018d2ca667', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-04-13 01:22:14.774281 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6f6673cf5cbfd3e95b3618c708d07c2cab432a197cca3f773c542ff18e5f1324', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2026-04-13 01:22:14.774300 | orchestrator | skipping: [testbed-node-5] => (item={'id': '24392f093e5b4a49de1a8718da037d05cbf224901d2ba619d8ee31c1a17a026d', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-04-13 01:22:14.774337 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ecc326a2d199f0981d1812a0e78e982a88eb21c9cd463e6650d17f7c65c8d5f2', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-04-13 01:22:28.307691 | orchestrator | 2026-04-13 01:22:28.307803 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ******************************** 2026-04-13 01:22:28.307819 | orchestrator | Monday 13 April 2026 01:22:14 +0000 (0:00:00.536) 0:00:05.096 ********** 2026-04-13 01:22:28.307829 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:28.307840 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:28.307851 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:28.307876 | orchestrator | 2026-04-13 01:22:28.307887 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-13 01:22:28.307906 | orchestrator | Monday 13 April 2026 01:22:15 +0000 (0:00:00.484) 0:00:05.581 ********** 2026-04-13 01:22:28.307917 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:28.307927 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:22:28.307937 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:22:28.307946 | orchestrator | 2026-04-13 01:22:28.307956 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-13 01:22:28.307966 | orchestrator | Monday 13 April 2026 01:22:15 +0000 (0:00:00.300) 0:00:05.881 ********** 2026-04-13 01:22:28.307976 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:28.307986 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:28.307995 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:28.308005 | orchestrator | 2026-04-13 01:22:28.308015 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-13 01:22:28.308025 | orchestrator | Monday 13 April 2026 01:22:15 +0000 (0:00:00.314) 0:00:06.196 ********** 2026-04-13 01:22:28.308034 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:28.308044 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:28.308053 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:28.308063 | orchestrator | 2026-04-13 01:22:28.308073 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-13 
01:22:28.308083 | orchestrator | Monday 13 April 2026 01:22:16 +0000 (0:00:00.498) 0:00:06.694 ********** 2026-04-13 01:22:28.308093 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-13 01:22:28.308104 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-13 01:22:28.308114 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:28.308124 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-13 01:22:28.308133 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-13 01:22:28.308143 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:22:28.308153 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-13 01:22:28.308162 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-13 01:22:28.308172 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:22:28.308182 | orchestrator | 2026-04-13 01:22:28.308192 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-04-13 01:22:28.308201 | orchestrator | Monday 13 April 2026 01:22:16 +0000 (0:00:00.327) 0:00:07.022 ********** 2026-04-13 01:22:28.308231 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:28.308243 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:28.308254 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:28.308264 | orchestrator | 2026-04-13 01:22:28.308276 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-13 01:22:28.308286 | orchestrator | Monday 13 April 2026 01:22:17 +0000 (0:00:00.328) 0:00:07.350 ********** 2026-04-13 01:22:28.308297 | orchestrator | skipping: [testbed-node-3] 
2026-04-13 01:22:28.308308 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:22:28.308319 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:22:28.308330 | orchestrator | 2026-04-13 01:22:28.308341 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-13 01:22:28.308352 | orchestrator | Monday 13 April 2026 01:22:17 +0000 (0:00:00.287) 0:00:07.637 ********** 2026-04-13 01:22:28.308361 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:28.308371 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:22:28.308380 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:22:28.308390 | orchestrator | 2026-04-13 01:22:28.308399 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-13 01:22:28.308409 | orchestrator | Monday 13 April 2026 01:22:17 +0000 (0:00:00.487) 0:00:08.125 ********** 2026-04-13 01:22:28.308418 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:28.308428 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:28.308438 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:28.308448 | orchestrator | 2026-04-13 01:22:28.308457 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-13 01:22:28.308492 | orchestrator | Monday 13 April 2026 01:22:18 +0000 (0:00:00.304) 0:00:08.430 ********** 2026-04-13 01:22:28.308503 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:28.308513 | orchestrator | 2026-04-13 01:22:28.308523 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-13 01:22:28.308532 | orchestrator | Monday 13 April 2026 01:22:18 +0000 (0:00:00.276) 0:00:08.707 ********** 2026-04-13 01:22:28.308542 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:28.308552 | orchestrator | 2026-04-13 01:22:28.308561 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2026-04-13 01:22:28.308571 | orchestrator | Monday 13 April 2026 01:22:18 +0000 (0:00:00.252) 0:00:08.959 ********** 2026-04-13 01:22:28.308580 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:28.308610 | orchestrator | 2026-04-13 01:22:28.308620 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:22:28.308630 | orchestrator | Monday 13 April 2026 01:22:19 +0000 (0:00:00.285) 0:00:09.244 ********** 2026-04-13 01:22:28.308640 | orchestrator | 2026-04-13 01:22:28.308649 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:22:28.308658 | orchestrator | Monday 13 April 2026 01:22:19 +0000 (0:00:00.065) 0:00:09.309 ********** 2026-04-13 01:22:28.308668 | orchestrator | 2026-04-13 01:22:28.308678 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:22:28.308704 | orchestrator | Monday 13 April 2026 01:22:19 +0000 (0:00:00.067) 0:00:09.377 ********** 2026-04-13 01:22:28.308714 | orchestrator | 2026-04-13 01:22:28.308723 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-13 01:22:28.308733 | orchestrator | Monday 13 April 2026 01:22:19 +0000 (0:00:00.070) 0:00:09.448 ********** 2026-04-13 01:22:28.308742 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:28.308752 | orchestrator | 2026-04-13 01:22:28.308762 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-13 01:22:28.308771 | orchestrator | Monday 13 April 2026 01:22:19 +0000 (0:00:00.478) 0:00:09.926 ********** 2026-04-13 01:22:28.308781 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:28.308790 | orchestrator | 2026-04-13 01:22:28.308800 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-13 01:22:28.308810 | 
orchestrator | Monday 13 April 2026 01:22:20 +0000 (0:00:00.724) 0:00:10.650 ********** 2026-04-13 01:22:28.308826 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:28.308836 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:28.308845 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:28.308855 | orchestrator | 2026-04-13 01:22:28.308908 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-13 01:22:28.308919 | orchestrator | Monday 13 April 2026 01:22:20 +0000 (0:00:00.321) 0:00:10.972 ********** 2026-04-13 01:22:28.308929 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:28.308939 | orchestrator | 2026-04-13 01:22:28.308948 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-13 01:22:28.308958 | orchestrator | Monday 13 April 2026 01:22:21 +0000 (0:00:00.289) 0:00:11.261 ********** 2026-04-13 01:22:28.308967 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-13 01:22:28.308977 | orchestrator | 2026-04-13 01:22:28.308987 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-13 01:22:28.308996 | orchestrator | Monday 13 April 2026 01:22:23 +0000 (0:00:02.033) 0:00:13.294 ********** 2026-04-13 01:22:28.309006 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:28.309016 | orchestrator | 2026-04-13 01:22:28.309025 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-13 01:22:28.309035 | orchestrator | Monday 13 April 2026 01:22:23 +0000 (0:00:00.154) 0:00:13.449 ********** 2026-04-13 01:22:28.309045 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:28.309054 | orchestrator | 2026-04-13 01:22:28.309064 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-13 01:22:28.309073 | orchestrator | Monday 13 April 2026 01:22:23 +0000 (0:00:00.310) 0:00:13.759 
********** 2026-04-13 01:22:28.309083 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:28.309093 | orchestrator | 2026-04-13 01:22:28.309102 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-13 01:22:28.309112 | orchestrator | Monday 13 April 2026 01:22:23 +0000 (0:00:00.119) 0:00:13.879 ********** 2026-04-13 01:22:28.309122 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:28.309131 | orchestrator | 2026-04-13 01:22:28.309141 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-13 01:22:28.309150 | orchestrator | Monday 13 April 2026 01:22:23 +0000 (0:00:00.124) 0:00:14.004 ********** 2026-04-13 01:22:28.309160 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:28.309170 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:28.309180 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:28.309189 | orchestrator | 2026-04-13 01:22:28.309199 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-13 01:22:28.309208 | orchestrator | Monday 13 April 2026 01:22:24 +0000 (0:00:00.536) 0:00:14.540 ********** 2026-04-13 01:22:28.309218 | orchestrator | changed: [testbed-node-3] 2026-04-13 01:22:28.309227 | orchestrator | changed: [testbed-node-4] 2026-04-13 01:22:28.309237 | orchestrator | changed: [testbed-node-5] 2026-04-13 01:22:28.309247 | orchestrator | 2026-04-13 01:22:28.309256 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-13 01:22:28.309266 | orchestrator | Monday 13 April 2026 01:22:26 +0000 (0:00:01.773) 0:00:16.313 ********** 2026-04-13 01:22:28.309276 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:28.309286 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:28.309295 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:28.309305 | orchestrator | 2026-04-13 01:22:28.309314 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-04-13 01:22:28.309324 | orchestrator | Monday 13 April 2026 01:22:26 +0000 (0:00:00.312) 0:00:16.625 ********** 2026-04-13 01:22:28.309334 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:28.309343 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:28.309353 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:28.309363 | orchestrator | 2026-04-13 01:22:28.309372 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-13 01:22:28.309382 | orchestrator | Monday 13 April 2026 01:22:26 +0000 (0:00:00.493) 0:00:17.119 ********** 2026-04-13 01:22:28.309399 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:28.309409 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:22:28.309423 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:22:28.309433 | orchestrator | 2026-04-13 01:22:28.309443 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-13 01:22:28.309452 | orchestrator | Monday 13 April 2026 01:22:27 +0000 (0:00:00.499) 0:00:17.618 ********** 2026-04-13 01:22:28.309462 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:28.309505 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:28.309521 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:28.309538 | orchestrator | 2026-04-13 01:22:28.309555 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-13 01:22:28.309571 | orchestrator | Monday 13 April 2026 01:22:27 +0000 (0:00:00.312) 0:00:17.931 ********** 2026-04-13 01:22:28.309583 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:28.309593 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:22:28.309602 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:22:28.309612 | orchestrator | 2026-04-13 01:22:28.309621 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-04-13 01:22:28.309631 | orchestrator | Monday 13 April 2026 01:22:27 +0000 (0:00:00.303) 0:00:18.234 ********** 2026-04-13 01:22:28.309640 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:28.309650 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:22:28.309659 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:22:28.309669 | orchestrator | 2026-04-13 01:22:28.309685 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-13 01:22:36.097200 | orchestrator | Monday 13 April 2026 01:22:28 +0000 (0:00:00.304) 0:00:18.539 ********** 2026-04-13 01:22:36.097315 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:36.097334 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:36.097347 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:36.097359 | orchestrator | 2026-04-13 01:22:36.097372 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-13 01:22:36.097385 | orchestrator | Monday 13 April 2026 01:22:29 +0000 (0:00:00.735) 0:00:19.274 ********** 2026-04-13 01:22:36.097397 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:36.097408 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:36.097420 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:36.097431 | orchestrator | 2026-04-13 01:22:36.097443 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-13 01:22:36.097455 | orchestrator | Monday 13 April 2026 01:22:29 +0000 (0:00:00.488) 0:00:19.763 ********** 2026-04-13 01:22:36.097466 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:36.097478 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:36.097489 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:36.097570 | orchestrator | 2026-04-13 01:22:36.097589 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-13 
01:22:36.097677 | orchestrator | Monday 13 April 2026 01:22:29 +0000 (0:00:00.315) 0:00:20.079 ********** 2026-04-13 01:22:36.097693 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:36.097706 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:22:36.097717 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:22:36.097728 | orchestrator | 2026-04-13 01:22:36.097742 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-13 01:22:36.097755 | orchestrator | Monday 13 April 2026 01:22:30 +0000 (0:00:00.536) 0:00:20.616 ********** 2026-04-13 01:22:36.097769 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:22:36.097782 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:22:36.097796 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:22:36.097808 | orchestrator | 2026-04-13 01:22:36.097821 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-13 01:22:36.097834 | orchestrator | Monday 13 April 2026 01:22:30 +0000 (0:00:00.318) 0:00:20.934 ********** 2026-04-13 01:22:36.097847 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-13 01:22:36.097886 | orchestrator | 2026-04-13 01:22:36.097900 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-13 01:22:36.097913 | orchestrator | Monday 13 April 2026 01:22:30 +0000 (0:00:00.266) 0:00:21.200 ********** 2026-04-13 01:22:36.097924 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:22:36.097935 | orchestrator | 2026-04-13 01:22:36.097946 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-13 01:22:36.097957 | orchestrator | Monday 13 April 2026 01:22:31 +0000 (0:00:00.315) 0:00:21.516 ********** 2026-04-13 01:22:36.097967 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-13 01:22:36.097978 | orchestrator | 2026-04-13 01:22:36.097989 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-13 01:22:36.098000 | orchestrator | Monday 13 April 2026 01:22:33 +0000 (0:00:01.767) 0:00:23.284 ********** 2026-04-13 01:22:36.098011 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-13 01:22:36.098084 | orchestrator | 2026-04-13 01:22:36.098096 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-13 01:22:36.098106 | orchestrator | Monday 13 April 2026 01:22:33 +0000 (0:00:00.275) 0:00:23.559 ********** 2026-04-13 01:22:36.098117 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-13 01:22:36.098128 | orchestrator | 2026-04-13 01:22:36.098139 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:22:36.098182 | orchestrator | Monday 13 April 2026 01:22:33 +0000 (0:00:00.286) 0:00:23.846 ********** 2026-04-13 01:22:36.098195 | orchestrator | 2026-04-13 01:22:36.098206 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:22:36.098217 | orchestrator | Monday 13 April 2026 01:22:33 +0000 (0:00:00.067) 0:00:23.913 ********** 2026-04-13 01:22:36.098228 | orchestrator | 2026-04-13 01:22:36.098239 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-13 01:22:36.098250 | orchestrator | Monday 13 April 2026 01:22:33 +0000 (0:00:00.068) 0:00:23.982 ********** 2026-04-13 01:22:36.098261 | orchestrator | 2026-04-13 01:22:36.098272 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-13 01:22:36.098283 | orchestrator | Monday 13 April 2026 01:22:33 +0000 (0:00:00.247) 0:00:24.230 ********** 2026-04-13 01:22:36.098294 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-13 01:22:36.098305 | orchestrator | 
2026-04-13 01:22:36.098316 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-13 01:22:36.098340 | orchestrator | Monday 13 April 2026 01:22:35 +0000 (0:00:01.399) 0:00:25.629 ********** 2026-04-13 01:22:36.098352 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-13 01:22:36.098363 | orchestrator |  "msg": [ 2026-04-13 01:22:36.098374 | orchestrator |  "Validator run completed.", 2026-04-13 01:22:36.098385 | orchestrator |  "You can find the report file here:", 2026-04-13 01:22:36.098396 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-13T01:22:11+00:00-report.json", 2026-04-13 01:22:36.098408 | orchestrator |  "on the following host:", 2026-04-13 01:22:36.098419 | orchestrator |  "testbed-manager" 2026-04-13 01:22:36.098430 | orchestrator |  ] 2026-04-13 01:22:36.098441 | orchestrator | } 2026-04-13 01:22:36.098452 | orchestrator | 2026-04-13 01:22:36.098463 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:22:36.098475 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-13 01:22:36.098487 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-13 01:22:36.098568 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-13 01:22:36.098593 | orchestrator | 2026-04-13 01:22:36.098605 | orchestrator | 2026-04-13 01:22:36.098616 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:22:36.098628 | orchestrator | Monday 13 April 2026 01:22:35 +0000 (0:00:00.407) 0:00:26.036 ********** 2026-04-13 01:22:36.098639 | orchestrator | =============================================================================== 2026-04-13 01:22:36.098650 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 2.03s 2026-04-13 01:22:36.098662 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.77s 2026-04-13 01:22:36.098673 | orchestrator | Aggregate test results step one ----------------------------------------- 1.77s 2026-04-13 01:22:36.098684 | orchestrator | Write report file ------------------------------------------------------- 1.40s 2026-04-13 01:22:36.098695 | orchestrator | Get timestamp for report file ------------------------------------------- 1.07s 2026-04-13 01:22:36.098706 | orchestrator | Prepare test data ------------------------------------------------------- 0.74s 2026-04-13 01:22:36.098717 | orchestrator | Create report output directory ------------------------------------------ 0.73s 2026-04-13 01:22:36.098728 | orchestrator | Fail early due to containers not running -------------------------------- 0.72s 2026-04-13 01:22:36.098739 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.54s 2026-04-13 01:22:36.098750 | orchestrator | Prepare test data ------------------------------------------------------- 0.54s 2026-04-13 01:22:36.098761 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.54s 2026-04-13 01:22:36.098771 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.50s 2026-04-13 01:22:36.098783 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s 2026-04-13 01:22:36.098793 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s 2026-04-13 01:22:36.098804 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.49s 2026-04-13 01:22:36.098815 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.49s 2026-04-13 01:22:36.098826 | orchestrator | Get count of ceph-osd containers 
on host -------------------------------- 0.48s 2026-04-13 01:22:36.098837 | orchestrator | Print report file information ------------------------------------------- 0.48s 2026-04-13 01:22:36.098848 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.47s 2026-04-13 01:22:36.098859 | orchestrator | Print report file information ------------------------------------------- 0.41s 2026-04-13 01:22:36.300866 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-13 01:22:36.308339 | orchestrator | + set -e 2026-04-13 01:22:36.308464 | orchestrator | + source /opt/manager-vars.sh 2026-04-13 01:22:36.308483 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-13 01:22:36.308525 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-13 01:22:36.308537 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-13 01:22:36.308549 | orchestrator | ++ CEPH_VERSION=reef 2026-04-13 01:22:36.308560 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-13 01:22:36.308572 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-13 01:22:36.308583 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-13 01:22:36.308594 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-13 01:22:36.308618 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-13 01:22:36.308641 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-13 01:22:36.308652 | orchestrator | ++ export ARA=false 2026-04-13 01:22:36.308664 | orchestrator | ++ ARA=false 2026-04-13 01:22:36.308674 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-13 01:22:36.308685 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-13 01:22:36.308696 | orchestrator | ++ export TEMPEST=true 2026-04-13 01:22:36.308706 | orchestrator | ++ TEMPEST=true 2026-04-13 01:22:36.308717 | orchestrator | ++ export IS_ZUUL=true 2026-04-13 01:22:36.308728 | orchestrator | ++ IS_ZUUL=true 2026-04-13 01:22:36.308738 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 
2026-04-13 01:22:36.308750 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231
2026-04-13 01:22:36.308760 | orchestrator | ++ export EXTERNAL_API=false
2026-04-13 01:22:36.308771 | orchestrator | ++ EXTERNAL_API=false
2026-04-13 01:22:36.308782 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-13 01:22:36.308818 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-13 01:22:36.308829 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-13 01:22:36.308840 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-13 01:22:36.308855 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-13 01:22:36.308874 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-13 01:22:36.308892 | orchestrator | + source /etc/os-release
2026-04-13 01:22:36.308910 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS'
2026-04-13 01:22:36.308928 | orchestrator | ++ NAME=Ubuntu
2026-04-13 01:22:36.308945 | orchestrator | ++ VERSION_ID=24.04
2026-04-13 01:22:36.308962 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)'
2026-04-13 01:22:36.308979 | orchestrator | ++ VERSION_CODENAME=noble
2026-04-13 01:22:36.308997 | orchestrator | ++ ID=ubuntu
2026-04-13 01:22:36.309044 | orchestrator | ++ ID_LIKE=debian
2026-04-13 01:22:36.309063 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2026-04-13 01:22:36.309080 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2026-04-13 01:22:36.309097 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2026-04-13 01:22:36.309704 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2026-04-13 01:22:36.309733 | orchestrator | ++ UBUNTU_CODENAME=noble
2026-04-13 01:22:36.309750 | orchestrator | ++ LOGO=ubuntu-logo
2026-04-13 01:22:36.309768 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2026-04-13 01:22:36.309788 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2026-04-13 01:22:36.309808 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-04-13 01:22:36.342416 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-04-13 01:23:01.981549 | orchestrator |
2026-04-13 01:23:01.981745 | orchestrator | # Status of Elasticsearch
2026-04-13 01:23:01.981762 | orchestrator |
2026-04-13 01:23:01.981775 | orchestrator | + pushd /opt/configuration/contrib
2026-04-13 01:23:01.981789 | orchestrator | + echo
2026-04-13 01:23:01.981801 | orchestrator | + echo '# Status of Elasticsearch'
2026-04-13 01:23:01.981812 | orchestrator | + echo
2026-04-13 01:23:01.981825 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-04-13 01:23:02.165931 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-04-13 01:23:02.166209 | orchestrator |
2026-04-13 01:23:02.166228 | orchestrator | # Status of MariaDB
2026-04-13 01:23:02.166234 | orchestrator |
2026-04-13 01:23:02.166238 | orchestrator | + echo
2026-04-13 01:23:02.166243 | orchestrator | + echo '# Status of MariaDB'
2026-04-13 01:23:02.166247 | orchestrator | + echo
2026-04-13 01:23:02.167168 | orchestrator | ++ semver latest 10.0.0-0
2026-04-13 01:23:02.221023 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-13 01:23:02.221124 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-13 01:23:02.221140 | orchestrator | + osism status database
2026-04-13 01:23:03.848526 | orchestrator | 2026-04-13 01:23:03 | ERROR  | Unable to get ansible vault password
2026-04-13 01:23:03.848790 | orchestrator | 2026-04-13 01:23:03 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:23:03.848814 | orchestrator | 2026-04-13 01:23:03 | ERROR  | Dropping encrypted entries
2026-04-13 01:23:03.881479 | orchestrator | 2026-04-13 01:23:03 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0...
2026-04-13 01:23:03.892713 | orchestrator | 2026-04-13 01:23:03 | INFO  | Cluster Status: Primary
2026-04-13 01:23:03.892798 | orchestrator | 2026-04-13 01:23:03 | INFO  | Connected: ON
2026-04-13 01:23:03.892812 | orchestrator | 2026-04-13 01:23:03 | INFO  | Ready: ON
2026-04-13 01:23:03.892823 | orchestrator | 2026-04-13 01:23:03 | INFO  | Cluster Size: 3
2026-04-13 01:23:03.892835 | orchestrator | 2026-04-13 01:23:03 | INFO  | Local State: Synced
2026-04-13 01:23:03.892846 | orchestrator | 2026-04-13 01:23:03 | INFO  | Cluster State UUID: c22c0f48-36d3-11f1-ae41-e6172b2f0837
2026-04-13 01:23:03.893049 | orchestrator | 2026-04-13 01:23:03 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306
2026-04-13 01:23:03.893149 | orchestrator | 2026-04-13 01:23:03 | INFO  | Galera Version: 26.4.25(r7387a566)
2026-04-13 01:23:03.893169 | orchestrator | 2026-04-13 01:23:03 | INFO  | Local Node UUID: f6c7254d-36d3-11f1-b022-b6851ab446c2
2026-04-13 01:23:03.893186 | orchestrator | 2026-04-13 01:23:03 | INFO  | Flow Control Paused: 0.00%
2026-04-13 01:23:03.893204 | orchestrator | 2026-04-13 01:23:03 | INFO  | Recv Queue Avg: 0.010101
2026-04-13 01:23:03.893238 | orchestrator | 2026-04-13 01:23:03 | INFO  | Send Queue Avg: 0.00144238
2026-04-13 01:23:03.893258 | orchestrator | 2026-04-13 01:23:03 | INFO  | Transactions: 4688 local commits, 6873 replicated, 99 received
2026-04-13 01:23:03.893276 | orchestrator | 2026-04-13 01:23:03 | INFO  | Conflicts: 0 cert failures, 0 bf aborts
2026-04-13 01:23:03.894208 | orchestrator | 2026-04-13 01:23:03 | INFO  | MariaDB Uptime: 24 minutes
2026-04-13 01:23:03.894240 | orchestrator | 2026-04-13 01:23:03 | INFO  | Threads: 132 connected, 1 running
2026-04-13 01:23:03.894251 | orchestrator | 2026-04-13 01:23:03 | INFO  | Queries: 229510 total, 0 slow
2026-04-13 01:23:03.894262 | orchestrator | 2026-04-13 01:23:03 | INFO  | Aborted Connects: 146
2026-04-13 01:23:03.894273 | orchestrator | 2026-04-13 01:23:03 | INFO  | MariaDB Galera Cluster validation PASSED
2026-04-13 01:23:04.173788 | orchestrator |
2026-04-13 01:23:04.173903 | orchestrator | # Status of Prometheus
2026-04-13 01:23:04.173925 | orchestrator |
2026-04-13 01:23:04.173943 | orchestrator | + echo
2026-04-13 01:23:04.173959 | orchestrator | + echo '# Status of Prometheus'
2026-04-13 01:23:04.173976 | orchestrator | + echo
2026-04-13 01:23:04.173993 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-04-13 01:23:04.231953 | orchestrator | Unauthorized
2026-04-13 01:23:04.235209 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-04-13 01:23:04.283799 | orchestrator | Unauthorized
2026-04-13 01:23:04.290093 | orchestrator |
2026-04-13 01:23:04.290154 | orchestrator | # Status of RabbitMQ
2026-04-13 01:23:04.290161 | orchestrator |
2026-04-13 01:23:04.290166 | orchestrator | + echo
2026-04-13 01:23:04.290171 | orchestrator | + echo '# Status of RabbitMQ'
2026-04-13 01:23:04.290176 | orchestrator | + echo
2026-04-13 01:23:04.291788 | orchestrator | ++ semver latest 10.0.0-0
2026-04-13 01:23:04.359057 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-13 01:23:04.359155 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-13 01:23:04.359170 | orchestrator | + osism status messaging
2026-04-13 01:23:11.888004 | orchestrator | 2026-04-13 01:23:11 | ERROR  | Unable to get ansible vault password
2026-04-13 01:23:11.888089 | orchestrator | 2026-04-13 01:23:11 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:23:11.888098 | orchestrator | 2026-04-13 01:23:11 | ERROR  | Dropping encrypted entries
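The `osism status database` output above reduces to a handful of Galera health indicators (Cluster Status, Connected, Ready, Local State). As a stand-alone sketch, the same checks can be run against a plain `name value` dump such as `mysql -N -e "SHOW GLOBAL STATUS LIKE 'wsrep_%'"` would produce; the `galera_healthy` helper name and the exact variable selection are illustrative assumptions, not part of the testbed scripts:

```shell
# Illustrative re-creation of the Galera checks reported above.
# Reads "name value" pairs on stdin (e.g. from:
#   mysql -N -e "SHOW GLOBAL STATUS LIKE 'wsrep_%'")
# and exits 0 only when the cluster looks healthy.
galera_healthy() {
    awk '
        $1 == "wsrep_cluster_status"      { primary = ($2 == "Primary") }
        $1 == "wsrep_connected"           { connected = ($2 == "ON") }
        $1 == "wsrep_ready"               { ready = ($2 == "ON") }
        $1 == "wsrep_local_state_comment" { synced = ($2 == "Synced") }
        END { exit !(primary && connected && ready && synced) }
    '
}
```

Fed the values logged above (Primary / ON / ON / Synced), this returns success, matching the PASSED verdict.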
2026-04-13 01:23:11.922450 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack...
2026-04-13 01:23:11.981353 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7
2026-04-13 01:23:11.981609 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15
2026-04-13 01:23:11.981827 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0
2026-04-13 01:23:11.981857 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-0] Cluster Size: 3
2026-04-13 01:23:11.981879 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-13 01:23:11.981900 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-13 01:23:11.981948 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-0] Partitions: None (healthy)
2026-04-13 01:23:11.981962 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-0] Connections: 206, Channels: 205, Queues: 173
2026-04-13 01:23:11.981990 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-0] Messages: 235 total, 234 ready, 1 unacked
2026-04-13 01:23:11.982002 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-0] Message Rates: 6.2/s publish, 7.4/s deliver
2026-04-13 01:23:11.982071 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-0] Disk Free: 57.5 GB (limit: 0.0 GB)
2026-04-13 01:23:11.982084 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-0] Memory Used: 0.18 GB (limit: 12.54 GB)
2026-04-13 01:23:11.982095 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-0] File Descriptors: 121/1024
2026-04-13 01:23:11.982747 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-0] Sockets: 73/832
2026-04-13 01:23:11.982829 | orchestrator | 2026-04-13 01:23:11 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack...
2026-04-13 01:23:12.050718 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7
2026-04-13 01:23:12.050825 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15
2026-04-13 01:23:12.050863 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1
2026-04-13 01:23:12.050869 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-1] Cluster Size: 3
2026-04-13 01:23:12.050875 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-13 01:23:12.050881 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-13 01:23:12.050888 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-1] Partitions: None (healthy)
2026-04-13 01:23:12.050895 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-1] Connections: 206, Channels: 205, Queues: 173
2026-04-13 01:23:12.050902 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-1] Messages: 235 total, 234 ready, 1 unacked
2026-04-13 01:23:12.050906 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-1] Message Rates: 6.2/s publish, 7.4/s deliver
2026-04-13 01:23:12.050918 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-1] Disk Free: 57.9 GB (limit: 0.0 GB)
2026-04-13 01:23:12.050923 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-1] Memory Used: 0.18 GB (limit: 12.54 GB)
2026-04-13 01:23:12.051251 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-1] File Descriptors: 118/1024
2026-04-13 01:23:12.051441 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-1] Sockets: 72/832
2026-04-13 01:23:12.051453 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack...
2026-04-13 01:23:12.142114 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7
2026-04-13 01:23:12.142238 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15
2026-04-13 01:23:12.142404 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2
2026-04-13 01:23:12.142540 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-2] Cluster Size: 3
2026-04-13 01:23:12.142583 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-13 01:23:12.142675 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-13 01:23:12.142690 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-2] Partitions: None (healthy)
2026-04-13 01:23:12.142716 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-2] Connections: 206, Channels: 205, Queues: 173
2026-04-13 01:23:12.142728 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-2] Messages: 235 total, 234 ready, 1 unacked
2026-04-13 01:23:12.142739 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-2] Message Rates: 6.2/s publish, 7.4/s deliver
2026-04-13 01:23:12.142987 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-2] Disk Free: 57.8 GB (limit: 0.0 GB)
2026-04-13 01:23:12.143297 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-2] Memory Used: 0.17 GB (limit: 12.54 GB)
2026-04-13 01:23:12.143474 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-2] File Descriptors: 107/1024
2026-04-13 01:23:12.143756 | orchestrator | 2026-04-13 01:23:12 | INFO  | [testbed-node-2] Sockets: 61/832
2026-04-13 01:23:12.144140 | orchestrator | 2026-04-13 01:23:12 | INFO  | RabbitMQ Cluster validation PASSED
2026-04-13 01:23:12.409845 | orchestrator |
2026-04-13 01:23:12.409923 | orchestrator | # Status of Redis
2026-04-13 01:23:12.409933 | orchestrator |
2026-04-13 01:23:12.409941 | orchestrator | + echo
2026-04-13 01:23:12.409949 | orchestrator | + echo '# Status of Redis'
2026-04-13 01:23:12.409957 | orchestrator | + echo
2026-04-13 01:23:12.409966 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-04-13 01:23:12.413799 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001047s;;;0.000000;10.000000
2026-04-13 01:23:12.413854 | orchestrator | + popd
2026-04-13 01:23:12.413864 | orchestrator |
2026-04-13 01:23:12.413874 | orchestrator | # Create backup of MariaDB database
2026-04-13 01:23:12.413885 | orchestrator |
2026-04-13 01:23:12.413893 | orchestrator | + echo
2026-04-13 01:23:12.413903 | orchestrator | + echo '# Create backup of MariaDB database'
2026-04-13 01:23:12.413912 | orchestrator | + echo
2026-04-13 01:23:12.413922 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-04-13 01:23:13.826515 | orchestrator | 2026-04-13 01:23:13 | INFO  | Prepare task for execution of mariadb_backup.
2026-04-13 01:23:13.890543 | orchestrator | 2026-04-13 01:23:13 | INFO  | Task a83406be-0855-4030-82e2-3b1b45fc8534 (mariadb_backup) was prepared for execution.
2026-04-13 01:23:13.890659 | orchestrator | 2026-04-13 01:23:13 | INFO  | It takes a moment until task a83406be-0855-4030-82e2-3b1b45fc8534 (mariadb_backup) has been started and output is visible here.
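Both `osism status` invocations above are gated the same way: `semver latest 10.0.0-0` returns -1 (the `latest` tag does not parse as a version), so the script falls through to the literal `[[ latest == latest ]]` match before running the check. The gate can be sketched as a single helper; the `version_at_least` name and the `sort -V` comparison are assumptions for illustration, not the actual testbed include:

```shell
# Sketch of the version gate seen above: treat the rolling "latest"
# tag as always new enough, otherwise compare real version strings.
version_at_least() {
    local have="$1" want="$2"
    if [[ "$have" == "latest" ]]; then
        return 0  # rolling tag: no numeric comparison possible
    fi
    # sort -V orders version strings; "have" is at least "want"
    # exactly when "want" sorts first (or the two are equal).
    [[ "$(printf '%s\n' "$want" "$have" | sort -V | head -n1)" == "$want" ]]
}
```

This keeps the "latest always passes" behaviour visible in the trace while still ordering real tags like `10.0.0-0` correctly.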
2026-04-13 01:24:00.640323 | orchestrator |
2026-04-13 01:24:00.640437 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 01:24:00.640452 | orchestrator |
2026-04-13 01:24:00.640465 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 01:24:00.640477 | orchestrator | Monday 13 April 2026 01:23:17 +0000 (0:00:00.249) 0:00:00.249 **********
2026-04-13 01:24:00.640488 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:24:00.640500 | orchestrator | ok: [testbed-node-1]
2026-04-13 01:24:00.640512 | orchestrator | ok: [testbed-node-2]
2026-04-13 01:24:00.640523 | orchestrator |
2026-04-13 01:24:00.640535 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 01:24:00.640546 | orchestrator | Monday 13 April 2026 01:23:17 +0000 (0:00:00.334) 0:00:00.584 **********
2026-04-13 01:24:00.640557 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-13 01:24:00.640569 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-13 01:24:00.640581 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-13 01:24:00.640616 | orchestrator |
2026-04-13 01:24:00.640628 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-13 01:24:00.640647 | orchestrator |
2026-04-13 01:24:00.640665 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-13 01:24:00.640709 | orchestrator | Monday 13 April 2026 01:23:18 +0000 (0:00:00.491) 0:00:01.075 **********
2026-04-13 01:24:00.640730 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-13 01:24:00.640748 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-13 01:24:00.640766 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-13 01:24:00.640899 | orchestrator |
2026-04-13 01:24:00.640924 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-13 01:24:00.640949 | orchestrator | Monday 13 April 2026 01:23:18 +0000 (0:00:00.475) 0:00:01.551 **********
2026-04-13 01:24:00.640972 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 01:24:00.640993 | orchestrator |
2026-04-13 01:24:00.641014 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-04-13 01:24:00.641034 | orchestrator | Monday 13 April 2026 01:23:19 +0000 (0:00:03.633) 0:00:05.904 **********
2026-04-13 01:24:00.641054 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:24:00.641074 | orchestrator | ok: [testbed-node-2]
2026-04-13 01:24:00.641095 | orchestrator | ok: [testbed-node-1]
2026-04-13 01:24:00.641115 | orchestrator |
2026-04-13 01:24:00.641136 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-04-13 01:24:00.641157 | orchestrator | Monday 13 April 2026 01:23:22 +0000 (0:00:03.633) 0:00:05.904 **********
2026-04-13 01:24:00.641176 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:24:00.641192 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:24:00.641220 | orchestrator | changed: [testbed-node-0]
2026-04-13 01:24:00.641232 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-13 01:24:00.641243 | orchestrator |
2026-04-13 01:24:00.641253 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-13 01:24:00.641264 | orchestrator | skipping: no hosts matched
2026-04-13 01:24:00.641275 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-04-13 01:24:00.641286 | orchestrator |
2026-04-13 01:24:00.641297 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-13 01:24:00.641308 | orchestrator | skipping: no hosts matched
2026-04-13 01:24:00.641319 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_bootstrap_restart
2026-04-13 01:24:00.641340 | orchestrator |
2026-04-13 01:24:00.641351 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-13 01:24:00.641361 | orchestrator | skipping: no hosts matched
2026-04-13 01:24:00.641372 | orchestrator |
2026-04-13 01:24:00.641383 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-13 01:24:00.641394 | orchestrator |
2026-04-13 01:24:00.641404 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-13 01:24:00.641415 | orchestrator | Monday 13 April 2026 01:23:59 +0000 (0:00:36.799) 0:00:42.703 **********
2026-04-13 01:24:00.641426 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:24:00.641437 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:24:00.641447 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:24:00.641458 | orchestrator |
2026-04-13 01:24:00.641468 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-13 01:24:00.641479 | orchestrator | Monday 13 April 2026 01:23:59 +0000 (0:00:00.299) 0:00:43.003 **********
2026-04-13 01:24:00.641490 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:24:00.641501 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:24:00.641512 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:24:00.641522 | orchestrator |
2026-04-13 01:24:00.641533 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 01:24:00.641557 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 01:24:00.641569 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-13 01:24:00.641580 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-13 01:24:00.641591 | orchestrator |
2026-04-13 01:24:00.641601 | orchestrator |
2026-04-13 01:24:00.641612 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 01:24:00.641623 | orchestrator | Monday 13 April 2026 01:24:00 +0000 (0:00:00.269) 0:00:43.272 **********
2026-04-13 01:24:00.641634 | orchestrator | ===============================================================================
2026-04-13 01:24:00.641645 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 36.80s
2026-04-13 01:24:00.641676 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.63s
2026-04-13 01:24:00.641688 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.72s
2026-04-13 01:24:00.641699 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s
2026-04-13 01:24:00.641710 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.48s
2026-04-13 01:24:00.641721 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-04-13 01:24:00.641732 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s
2026-04-13 01:24:00.641743 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.27s
2026-04-13 01:24:00.838762 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-04-13 01:24:00.846060 | orchestrator | + set -e
2026-04-13 01:24:00.846134 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-13 01:24:00.846148 | orchestrator | ++ export INTERACTIVE=false
2026-04-13 01:24:00.846159 | orchestrator | ++ INTERACTIVE=false
2026-04-13 01:24:00.846167 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-13 01:24:00.846176 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-13 01:24:00.846186 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-13 01:24:00.847451 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-13 01:24:00.853661 | orchestrator |
2026-04-13 01:24:00.853725 | orchestrator | # OpenStack endpoints
2026-04-13 01:24:00.853739 | orchestrator |
2026-04-13 01:24:00.853751 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-13 01:24:00.853763 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-13 01:24:00.853774 | orchestrator | + export OS_CLOUD=admin
2026-04-13 01:24:00.853824 | orchestrator | + OS_CLOUD=admin
2026-04-13 01:24:00.853842 | orchestrator | + echo
2026-04-13 01:24:00.853860 | orchestrator | + echo '# OpenStack endpoints'
2026-04-13 01:24:00.853878 | orchestrator | + echo
2026-04-13 01:24:00.853892 | orchestrator | + openstack endpoint list
2026-04-13 01:24:04.256609 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-13 01:24:04.256733 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-04-13 01:24:04.256749 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-13 01:24:04.256761 | orchestrator | | 0da4e9d88d7144549a54146d898963b3 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-13 01:24:04.256865 | orchestrator | | 118e3669576f48d7aa30561b4f069571 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-04-13 01:24:04.256884 | orchestrator | | 1a64203b476d418683c1d09f2a9f7242 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-04-13 01:24:04.256916 | orchestrator | | 2b41895208ca4a9abed45044a6fc76d3 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-04-13 01:24:04.256927 | orchestrator | | 36c3b695216542fca62fa9dff0c9b053 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-04-13 01:24:04.256939 | orchestrator | | 466033dad32a4d60ad7d244f69a6a566 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-04-13 01:24:04.256950 | orchestrator | | 5ff1cdcdbf444e7ab2235e27cc0a2a54 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-04-13 01:24:04.256961 | orchestrator | | 6112524b302c4608ae844985c1ccefec | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-04-13 01:24:04.256972 | orchestrator | | 637c777844bc49c9b1a63ac71cb88b6e | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-04-13 01:24:04.256983 | orchestrator | | 6d458306fa5a4224b344ae15eaf425fa | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-04-13 01:24:04.256994 | orchestrator | | 6d6b4388a7d5455a9f89fcf76671d991 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-04-13 01:24:04.257005 | orchestrator | | 6d7ae3ad3049461bb12d2905f695053e | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-13 01:24:04.257016 | orchestrator | | 6f698294140f402f9be4e83a29797a49 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-04-13 01:24:04.257027 | orchestrator | | a137788909194d0589ba080bfb18e867 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-04-13 01:24:04.257038 | orchestrator | | a6db0c59dafa4d189a9cbc66952f1954 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-13 01:24:04.257061 | orchestrator | | af7f9a0eb2ac42a9bdbb5182418f0f04 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-04-13 01:24:04.257072 | orchestrator | | c186ce47656844a7b011d920581704db | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-13 01:24:04.257083 | orchestrator | | cd493306d5d9437aa93e79353049f2b4 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-04-13 01:24:04.257094 | orchestrator | | d2a5d2038f9f467c933d382c26cd36cc | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-04-13 01:24:04.257105 | orchestrator | | e4a6530d3d8547e8a856383b8ed00771 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-04-13 01:24:04.257135 | orchestrator | | efde38d84d9849af9f1d6a72dd0db5d5 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-04-13 01:24:04.257151 | orchestrator | | f97b5ae08e46453aba1b1bdcf659e4e1 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-04-13 01:24:04.257164 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-13 01:24:04.518356 | orchestrator |
2026-04-13 01:24:04.518447 | orchestrator | # Cinder
2026-04-13 01:24:04.518460 | orchestrator |
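One property worth asserting in the endpoint table above is that every service exposes both a public and an internal interface. A toy consistency check over `service interface` pairs, as `openstack endpoint list -f value -c 'Service Name' -c Interface` would emit; the `missing_interfaces` helper is a hypothetical illustration, not part of the check scripts:

```shell
# Prints every service that lacks a public or an internal endpoint;
# empty output means the table is consistent.
# Input: "service interface" pairs, one per line.
missing_interfaces() {
    awk '
        { seen[$1 " " $2] = 1; svc[$1] = 1 }
        END {
            for (s in svc)
                if (!seen[s " public"] || !seen[s " internal"])
                    print s
        }
    '
}
```

For the table above the output would be empty: each of the ten services (cinderv3, glance, magnum, nova, octavia, designate, neutron, keystone, swift, placement, barbican) appears with both interfaces.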
2026-04-13 01:24:04.518471 | orchestrator | + echo
2026-04-13 01:24:04.518481 | orchestrator | + echo '# Cinder'
2026-04-13 01:24:04.518491 | orchestrator | + echo
2026-04-13 01:24:04.518501 | orchestrator | + openstack volume service list
2026-04-13 01:24:07.290281 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-13 01:24:07.290391 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-04-13 01:24:07.290407 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-13 01:24:07.290419 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-13T01:24:00.000000 |
2026-04-13 01:24:07.290431 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-13T01:23:59.000000 |
2026-04-13 01:24:07.290442 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-13T01:24:00.000000 |
2026-04-13 01:24:07.290454 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-13T01:23:59.000000 |
2026-04-13 01:24:07.290465 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-13T01:23:58.000000 |
2026-04-13 01:24:07.290476 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-13T01:24:00.000000 |
2026-04-13 01:24:07.290487 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-13T01:24:04.000000 |
2026-04-13 01:24:07.290498 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-13T01:23:58.000000 |
2026-04-13 01:24:07.290509 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-13T01:23:58.000000 |
2026-04-13 01:24:07.290520 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-13 01:24:07.651274 | orchestrator |
2026-04-13 01:24:07.651338 | orchestrator | # Neutron
2026-04-13 01:24:07.651345 | orchestrator |
2026-04-13 01:24:07.651349 | orchestrator | + echo
2026-04-13 01:24:07.651354 | orchestrator | + echo '# Neutron'
2026-04-13 01:24:07.651359 | orchestrator | + echo
2026-04-13 01:24:07.651363 | orchestrator | + openstack network agent list
2026-04-13 01:24:10.469902 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-13 01:24:10.470006 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-04-13 01:24:10.470053 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-13 01:24:10.470061 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-04-13 01:24:10.470067 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-04-13 01:24:10.470073 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-04-13 01:24:10.470079 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-04-13 01:24:10.470085 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-04-13 01:24:10.470091 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-04-13 01:24:10.470117 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-13 01:24:10.470123 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-13 01:24:10.470128 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-13 01:24:10.470134 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-13 01:24:10.769491 | orchestrator | + openstack network service provider list
2026-04-13 01:24:13.313119 | orchestrator | +---------------+------+---------+
2026-04-13 01:24:13.313239 | orchestrator | | Service Type | Name | Default |
2026-04-13 01:24:13.313256 | orchestrator | +---------------+------+---------+
2026-04-13 01:24:13.313267 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-04-13 01:24:13.313278 | orchestrator | +---------------+------+---------+
2026-04-13 01:24:13.613480 | orchestrator |
2026-04-13 01:24:13.613579 | orchestrator | # Nova
2026-04-13 01:24:13.613596 | orchestrator |
2026-04-13 01:24:13.613608 | orchestrator | + echo
2026-04-13 01:24:13.613620 | orchestrator | + echo '# Nova'
2026-04-13 01:24:13.613632 | orchestrator | + echo
2026-04-13 01:24:13.613644 | orchestrator | + openstack compute service list
2026-04-13 01:24:17.079764 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-13 01:24:17.079916 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-04-13 01:24:17.079933 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-13 01:24:17.079941 | orchestrator | | 25512904-8317-439d-8c86-7d8a727246d9 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-13T01:24:12.000000 |
2026-04-13 01:24:17.079965 | orchestrator | | d387a9ae-e359-4a9c-aabd-117d493759d2 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-13T01:24:14.000000 |
2026-04-13 01:24:17.079973 | orchestrator | | 4ca92c74-a05c-403c-8ebc-bbe40de5a923 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-13T01:24:08.000000 |
2026-04-13 01:24:17.079981 | orchestrator | | b76feb40-281a-45da-88d3-0a6747ec001d | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-13T01:24:15.000000 |
2026-04-13 01:24:17.079989 | orchestrator | | 05f58de9-1ee8-46dd-94f1-e3fb54cd597b | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-13T01:24:07.000000 |
2026-04-13 01:24:17.079996 | orchestrator | | 8790b7bc-b7fe-474b-8c3a-d753f3afdb42 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-13T01:24:08.000000 |
2026-04-13 01:24:17.080003 | orchestrator | | de150d54-25b9-477c-aff0-c854e1d129d9 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-13T01:24:12.000000 |
2026-04-13 01:24:17.080011 | orchestrator | | eeb52a18-ce74-47b5-82c2-b81f38624e19 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-13T01:24:14.000000 |
2026-04-13 01:24:17.080019 | orchestrator | | 62cba570-89c2-4a37-98b2-8e15fb6c94e1 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-13T01:24:14.000000 |
2026-04-13 01:24:17.080027 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-13 01:24:17.395136 | orchestrator | + openstack hypervisor list
2026-04-13 01:24:20.649806 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-13 01:24:20.649965 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-04-13 01:24:20.649981 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-13 01:24:20.649993 | orchestrator | | 3a086fb0-796a-43f1-b677-94d7a5ba5965 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-04-13 01:24:20.650095 | orchestrator | | 1bbbf9a4-a2f0-461c-b8e4-7090bf42fb7f | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-04-13 01:24:20.650111 | orchestrator | | 77c04f04-fbe2-4f5d-bdc3-934c92eee260 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-04-13 01:24:20.650122 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-13 01:24:20.960771 | orchestrator |
2026-04-13 01:24:20.960909 | orchestrator | # Run OpenStack test play
2026-04-13 01:24:20.960927 | orchestrator |
2026-04-13 01:24:20.960937 | orchestrator | + echo
2026-04-13 01:24:20.960948 | orchestrator | + echo '# Run OpenStack test play'
2026-04-13 01:24:20.960959 | orchestrator | + echo
2026-04-13 01:24:20.960969 | orchestrator | + osism apply --environment openstack test
2026-04-13 01:24:22.384921 | orchestrator | 2026-04-13 01:24:22 | INFO  | Trying to run play test in environment openstack
2026-04-13 01:24:32.488669 | orchestrator | 2026-04-13 01:24:32 | INFO  | Prepare task for execution of test.
2026-04-13 01:24:32.576420 | orchestrator | 2026-04-13 01:24:32 | INFO  | Task 6ece4d21-0a1b-45b0-af00-f418a28d6573 (test) was prepared for execution.
2026-04-13 01:24:32.576482 | orchestrator | 2026-04-13 01:24:32 | INFO  | It takes a moment until task 6ece4d21-0a1b-45b0-af00-f418a28d6573 (test) has been started and output is visible here.
2026-04-13 01:27:53.649219 | orchestrator |
2026-04-13 01:27:53.649313 | orchestrator | PLAY [Create test project] *****************************************************
2026-04-13 01:27:53.649375 | orchestrator |
2026-04-13 01:27:53.649395 | orchestrator | TASK [Create test domain] ******************************************************
2026-04-13 01:27:53.649411 | orchestrator | Monday 13 April 2026 01:24:35 +0000 (0:00:00.113) 0:00:00.113 **********
2026-04-13 01:27:53.649424 | orchestrator | changed: [localhost]
2026-04-13 01:27:53.649440 | orchestrator |
2026-04-13 01:27:53.649456 | orchestrator | TASK [Create test-admin user] **************************************************
2026-04-13 01:27:53.649470 | orchestrator | Monday 13 April 2026 01:24:39 +0000 (0:00:03.944) 0:00:04.057 **********
2026-04-13 01:27:53.649485 | orchestrator | changed: [localhost]
2026-04-13 01:27:53.649500 | orchestrator |
2026-04-13 01:27:53.649514 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-04-13 01:27:53.649530 | orchestrator | Monday 13 April 2026 01:24:44 +0000 (0:00:04.625) 0:00:08.683 **********
2026-04-13 01:27:53.649544 | orchestrator | changed: [localhost]
2026-04-13 01:27:53.649559 | orchestrator |
2026-04-13 01:27:53.649573 | orchestrator | TASK [Create test project] *****************************************************
2026-04-13 01:27:53.649588 | orchestrator | Monday 13 April 2026 01:24:51 +0000 (0:00:06.872) 0:00:15.556 **********
2026-04-13 01:27:53.649603 | orchestrator | changed: [localhost]
2026-04-13 01:27:53.649617 | orchestrator |
2026-04-13 01:27:53.649632 | orchestrator | TASK [Create test user] ********************************************************
2026-04-13 01:27:53.649648 | orchestrator | Monday 13 April 2026 01:24:55 +0000 (0:00:04.434) 0:00:19.990 **********
2026-04-13 01:27:53.649662 | orchestrator | changed: [localhost]
2026-04-13 01:27:53.649676 | orchestrator |
2026-04-13 01:27:53.649691 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-04-13 01:27:53.649705 | orchestrator | Monday 13 April 2026 01:25:00 +0000 (0:00:04.455) 0:00:24.446 **********
2026-04-13 01:27:53.649719 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-04-13 01:27:53.649732 | orchestrator | changed: [localhost] => (item=member)
2026-04-13 01:27:53.649746 | orchestrator | changed: [localhost] => (item=creator)
2026-04-13 01:27:53.649759 | orchestrator |
2026-04-13 01:27:53.649774 | orchestrator | TASK [Create test server group] ************************************************
2026-04-13 01:27:53.649789 | orchestrator | Monday 13 April 2026 01:25:12 +0000 (0:00:12.678) 0:00:37.125 **********
2026-04-13 01:27:53.649805 | orchestrator | changed: [localhost]
2026-04-13 01:27:53.649818 | orchestrator |
2026-04-13 01:27:53.649849 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-04-13 01:27:53.649866 | orchestrator | Monday 13 April 2026 01:25:17 +0000 (0:00:04.693) 0:00:41.818 **********
2026-04-13 01:27:53.649903 | orchestrator | changed: [localhost]
2026-04-13 01:27:53.649920 | orchestrator |
2026-04-13 01:27:53.649935 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-04-13 01:27:53.649951 | orchestrator | Monday 13 April 2026 01:25:22 +0000 (0:00:05.329) 0:00:47.147 **********
2026-04-13 01:27:53.649967 | orchestrator | changed: [localhost]
2026-04-13 01:27:53.649982 | orchestrator |
2026-04-13 01:27:53.649995 | orchestrator | TASK [Create icmp security group] **********************************************
2026-04-13 01:27:53.650006 | orchestrator | Monday 13 April 2026 01:25:27 +0000 (0:00:04.759) 0:00:51.907 **********
2026-04-13 01:27:53.650065 | orchestrator | changed: [localhost]
2026-04-13 01:27:53.650076 | orchestrator |
2026-04-13 01:27:53.650086 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-04-13 01:27:53.650096 | orchestrator | Monday 13 April 2026 01:25:31 +0000 (0:00:04.150) 0:00:56.057 **********
2026-04-13 01:27:53.650106 | orchestrator | changed: [localhost]
2026-04-13 01:27:53.650116 | orchestrator |
2026-04-13 01:27:53.650127 | orchestrator | TASK [Create test keypair] *****************************************************
2026-04-13 01:27:53.650136 | orchestrator | Monday 13 April 2026 01:25:36 +0000 (0:00:04.597) 0:01:00.655 **********
2026-04-13 01:27:53.650145 | orchestrator | changed: [localhost]
2026-04-13 01:27:53.650154 | orchestrator |
2026-04-13 01:27:53.650162 | orchestrator | TASK [Create test networks] ****************************************************
2026-04-13 01:27:53.650171 | orchestrator | Monday 13 April 2026 01:25:40 +0000 (0:00:04.380) 0:01:05.035 **********
2026-04-13 01:27:53.650180 | orchestrator | changed: [localhost] => (item={'name': 'test-1'})
2026-04-13 01:27:53.650188 | orchestrator | changed: [localhost] => (item={'name': 'test-2'})
2026-04-13 01:27:53.650197 | orchestrator | changed: [localhost] => (item={'name': 'test-3'})
2026-04-13 01:27:53.650206 | orchestrator |
2026-04-13 01:27:53.650214 | orchestrator | TASK [Create test subnets] *****************************************************
2026-04-13 01:27:53.650223 | orchestrator | Monday 13 April 2026 01:25:55 +0000 (0:00:14.835) 0:01:19.870 **********
2026-04-13 01:27:53.650232 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'})
2026-04-13 01:27:53.650241 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'})
2026-04-13 01:27:53.650250 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'})
2026-04-13 01:27:53.650258 | orchestrator |
2026-04-13 01:27:53.650267 | orchestrator | TASK [Create test routers] *****************************************************
2026-04-13 01:27:53.650276 | orchestrator | Monday 13 April 2026 01:26:12 +0000 (0:00:17.114) 0:01:36.984 **********
2026-04-13 01:27:53.650285 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'})
2026-04-13 01:27:53.650294 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'})
2026-04-13 01:27:53.650309 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'})
2026-04-13 01:27:53.650330 | orchestrator |
2026-04-13 01:27:53.650371 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-04-13 01:27:53.650386 | orchestrator |
2026-04-13 01:27:53.650400 | orchestrator | TASK [Get test server group] ***************************************************
2026-04-13 01:27:53.650434 | orchestrator | Monday 13 April 2026 01:26:45 +0000 (0:00:32.507) 0:02:09.492 **********
2026-04-13 01:27:53.650451 | orchestrator | ok: [localhost]
2026-04-13 01:27:53.650466 | orchestrator |
2026-04-13 01:27:53.650480 | orchestrator | TASK [Detach test volume] ******************************************************
2026-04-13 01:27:53.650489 | orchestrator | Monday 13 April 2026 01:26:49 +0000 (0:00:03.911) 0:02:13.404 **********
2026-04-13 01:27:53.650498 | orchestrator | skipping: [localhost]
2026-04-13 01:27:53.650507 | orchestrator |
2026-04-13 01:27:53.650515 | orchestrator | TASK [Delete test volume] ******************************************************
2026-04-13 01:27:53.650524 | orchestrator | Monday 13 April 2026 01:26:49 +0000 (0:00:00.059) 0:02:13.463 **********
2026-04-13 01:27:53.650591 | orchestrator | skipping: [localhost]
2026-04-13 01:27:53.650602 | orchestrator |
2026-04-13 01:27:53.650613 | orchestrator | TASK [Delete test instances] ***************************************************
2026-04-13 01:27:53.650627 | orchestrator | Monday 13 April 2026 01:26:49 +0000 (0:00:00.050) 0:02:13.514 **********
2026-04-13 01:27:53.650641 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-13 01:27:53.650656 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-13 01:27:53.650671 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-13 01:27:53.650684 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-13 01:27:53.650693 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-13 01:27:53.650702 | orchestrator | skipping: [localhost]
2026-04-13 01:27:53.650711 | orchestrator |
2026-04-13 01:27:53.650719 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-04-13 01:27:53.650728 | orchestrator | Monday 13 April 2026 01:26:49 +0000 (0:00:00.164) 0:02:13.678 **********
2026-04-13 01:27:53.650737 | orchestrator | skipping: [localhost]
2026-04-13 01:27:53.650745 | orchestrator |
2026-04-13 01:27:53.650754 | orchestrator | TASK [Create test instances] ***************************************************
2026-04-13 01:27:53.650762 | orchestrator | Monday 13 April 2026 01:26:49 +0000 (0:00:00.141) 0:02:13.819 **********
2026-04-13 01:27:53.650771 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-13 01:27:53.650779 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-13 01:27:53.650795 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-13 01:27:53.650804 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-13 01:27:53.650812 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-13 01:27:53.650821 | orchestrator |
2026-04-13 01:27:53.650829 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-04-13 01:27:53.650838 | orchestrator | Monday 13 April 2026 01:26:54 +0000 (0:00:04.843) 0:02:18.663 **********
2026-04-13 01:27:53.650846 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-13 01:27:53.650856 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-04-13 01:27:53.650864 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-04-13 01:27:53.650873 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-04-13 01:27:53.650882 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left).
2026-04-13 01:27:53.650898 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j173266859824.2842', 'results_file': '/ansible/.ansible_async/j173266859824.2842', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-13 01:27:53.650922 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j62859581457.2867', 'results_file': '/ansible/.ansible_async/j62859581457.2867', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-13 01:27:53.650940 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j302777047750.2892', 'results_file': '/ansible/.ansible_async/j302777047750.2892', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-13 01:27:53.650955 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j224238289502.2917', 'results_file': '/ansible/.ansible_async/j224238289502.2917', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-13 01:27:53.650982 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j574057149942.2942', 'results_file': '/ansible/.ansible_async/j574057149942.2942', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-13 01:27:53.650996 | orchestrator |
2026-04-13 01:27:53.651010 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-04-13 01:27:53.651025 | orchestrator | Monday 13 April 2026 01:27:52 +0000 (0:00:58.325) 0:03:16.989 **********
2026-04-13 01:27:53.651050 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-13 01:29:09.429130 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-13 01:29:09.429215 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-13 01:29:09.429223 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-13 01:29:09.429229 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-13 01:29:09.429234 | orchestrator |
2026-04-13 01:29:09.429241 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-04-13 01:29:09.429246 | orchestrator | Monday 13 April 2026 01:27:57 +0000 (0:00:04.458) 0:03:21.448 **********
2026-04-13 01:29:09.429252 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-04-13 01:29:09.429260 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j479207038014.3053', 'results_file': '/ansible/.ansible_async/j479207038014.3053', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-13 01:29:09.429268 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j875380907780.3078', 'results_file': '/ansible/.ansible_async/j875380907780.3078', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-13 01:29:09.429274 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j405315247244.3103', 'results_file': '/ansible/.ansible_async/j405315247244.3103', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-13 01:29:09.429290 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j342144442565.3128', 'results_file': '/ansible/.ansible_async/j342144442565.3128', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-13 01:29:09.429296 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j175922629348.3153', 'results_file': '/ansible/.ansible_async/j175922629348.3153', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-13 01:29:09.429301 | orchestrator |
2026-04-13 01:29:09.429307 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-04-13 01:29:09.429312 | orchestrator | Monday 13 April 2026 01:28:07 +0000 (0:00:09.872) 0:03:31.320 **********
2026-04-13 01:29:09.429317 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-13 01:29:09.429323 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-13 01:29:09.429328 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-13 01:29:09.429333 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-13 01:29:09.429338 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-13 01:29:09.429343 | orchestrator |
2026-04-13 01:29:09.429348 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-04-13 01:29:09.429369 | orchestrator | Monday 13 April 2026 01:28:12 +0000 (0:00:04.974) 0:03:36.294 **********
2026-04-13 01:29:09.429374 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-04-13 01:29:09.429380 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j961845058349.3222', 'results_file': '/ansible/.ansible_async/j961845058349.3222', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-13 01:29:09.429385 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j878576097889.3247', 'results_file': '/ansible/.ansible_async/j878576097889.3247', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-13 01:29:09.429391 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j344494818722.3273', 'results_file': '/ansible/.ansible_async/j344494818722.3273', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-13 01:29:09.429396 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j939347462163.3299', 'results_file': '/ansible/.ansible_async/j939347462163.3299', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-13 01:29:09.429411 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j984265752757.3325', 'results_file': '/ansible/.ansible_async/j984265752757.3325', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-13 01:29:09.429417 | orchestrator |
2026-04-13 01:29:09.429422 | orchestrator | TASK [Create test volume] ******************************************************
2026-04-13 01:29:09.429427 | orchestrator | Monday 13 April 2026 01:28:21 +0000 (0:00:09.765) 0:03:46.059 **********
2026-04-13 01:29:09.429433 | orchestrator | changed: [localhost]
2026-04-13 01:29:09.429439 | orchestrator |
2026-04-13 01:29:09.429445 | orchestrator | TASK [Attach test volume] ******************************************************
2026-04-13 01:29:09.429450 | orchestrator | Monday 13 April 2026 01:28:28 +0000 (0:00:06.806) 0:03:52.866 **********
2026-04-13 01:29:09.429455 | orchestrator | changed: [localhost]
2026-04-13 01:29:09.429460 | orchestrator |
2026-04-13 01:29:09.429526 | orchestrator | TASK [Create floating ip addresses] ********************************************
2026-04-13 01:29:09.429531 | orchestrator | Monday 13 April 2026 01:28:42 +0000 (0:00:14.053) 0:04:06.920 **********
2026-04-13 01:29:09.429537 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-13 01:29:09.429543 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-13 01:29:09.429548 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-13 01:29:09.429553 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-13 01:29:09.429558 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-13 01:29:09.429563 | orchestrator |
2026-04-13 01:29:09.429568 | orchestrator | TASK [Print floating ip addresses] *********************************************
2026-04-13 01:29:09.429574 | orchestrator | Monday 13 April 2026 01:29:09 +0000 (0:00:26.405) 0:04:33.325 **********
2026-04-13 01:29:09.429579 | orchestrator | ok: [localhost] => (item=test) => {
2026-04-13 01:29:09.429584 | orchestrator |  "msg": "test: 192.168.112.171"
2026-04-13 01:29:09.429589 | orchestrator | }
2026-04-13 01:29:09.429595 | orchestrator | ok: [localhost] => (item=test-1) => {
2026-04-13 01:29:09.429601 | orchestrator |  "msg": "test-1: 192.168.112.128"
2026-04-13 01:29:09.429606 | orchestrator | }
2026-04-13 01:29:09.429611 | orchestrator | ok: [localhost] => (item=test-2) => {
2026-04-13 01:29:09.429616 | orchestrator |  "msg": "test-2: 192.168.112.104"
2026-04-13 01:29:09.429621 | orchestrator | }
2026-04-13 01:29:09.429627 | orchestrator | ok: [localhost] => (item=test-3) => {
2026-04-13 01:29:09.429639 | orchestrator |  "msg": "test-3: 192.168.112.157"
2026-04-13 01:29:09.429644 | orchestrator | }
2026-04-13 01:29:09.429655 | orchestrator | ok: [localhost] => (item=test-4) => {
2026-04-13 01:29:09.429660 | orchestrator |  "msg": "test-4: 192.168.112.179"
2026-04-13 01:29:09.429665 | orchestrator | }
2026-04-13 01:29:09.429671 | orchestrator |
2026-04-13 01:29:09.429676 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 01:29:09.429682 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-13 01:29:09.429688 | orchestrator |
2026-04-13 01:29:09.429693 | orchestrator |
2026-04-13 01:29:09.429700 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 01:29:09.429706 | orchestrator | Monday 13 April 2026 01:29:09 +0000 (0:00:00.130) 0:04:33.456 **********
2026-04-13 01:29:09.429712 | orchestrator |
===============================================================================
2026-04-13 01:29:09.429718 | orchestrator | Wait for instance creation to complete --------------------------------- 58.33s
2026-04-13 01:29:09.429725 | orchestrator | Create test routers ---------------------------------------------------- 32.51s
2026-04-13 01:29:09.429731 | orchestrator | Create floating ip addresses ------------------------------------------- 26.41s
2026-04-13 01:29:09.429736 | orchestrator | Create test subnets ---------------------------------------------------- 17.11s
2026-04-13 01:29:09.429742 | orchestrator | Create test networks --------------------------------------------------- 14.84s
2026-04-13 01:29:09.429748 | orchestrator | Attach test volume ----------------------------------------------------- 14.05s
2026-04-13 01:29:09.429755 | orchestrator | Add member roles to user test ------------------------------------------ 12.68s
2026-04-13 01:29:09.429765 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.87s
2026-04-13 01:29:09.429773 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.77s
2026-04-13 01:29:09.429781 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.87s
2026-04-13 01:29:09.429789 | orchestrator | Create test volume ------------------------------------------------------ 6.81s
2026-04-13 01:29:09.429797 | orchestrator | Create ssh security group ----------------------------------------------- 5.33s
2026-04-13 01:29:09.429805 | orchestrator | Add tag to instances ---------------------------------------------------- 4.97s
2026-04-13 01:29:09.429813 | orchestrator | Create test instances --------------------------------------------------- 4.84s
2026-04-13 01:29:09.429821 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.76s
2026-04-13 01:29:09.429829 | orchestrator | Create test server group ------------------------------------------------ 4.69s
2026-04-13 01:29:09.429836 | orchestrator | Create test-admin user -------------------------------------------------- 4.63s
2026-04-13 01:29:09.429844 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.60s
2026-04-13 01:29:09.429852 | orchestrator | Add metadata to instances ----------------------------------------------- 4.46s
2026-04-13 01:29:09.429861 | orchestrator | Create test user -------------------------------------------------------- 4.46s
2026-04-13 01:29:09.642440 | orchestrator | + server_list
2026-04-13 01:29:09.642586 | orchestrator | + openstack --os-cloud test server list
2026-04-13 01:29:13.356003 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-13 01:29:13.356137 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-13 01:29:13.356164 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-13 01:29:13.356182 | orchestrator | | de531e7b-bf01-4329-a336-47ad25575537 | test-4 | ACTIVE | test-3=192.168.112.179, 192.168.202.160 | N/A (booted from volume) | SCS-1L-1 |
2026-04-13 01:29:13.356216 | orchestrator | | b09b4282-fa9c-4d06-8825-54dbfb06279d | test-3 | ACTIVE | test-2=192.168.112.157, 192.168.201.111 | N/A (booted from volume) | SCS-1L-1 |
2026-04-13 01:29:13.356272 | orchestrator | | 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 | test-1 | ACTIVE | test-1=192.168.112.128, 192.168.200.158 | N/A (booted from volume) | SCS-1L-1 |
2026-04-13 01:29:13.356294 | orchestrator | | c223eb87-e5cc-427d-a86b-cef45ba92bbe | test-2 | ACTIVE | test-2=192.168.112.104, 192.168.201.29 | N/A (booted from volume) | SCS-1L-1 |
2026-04-13 01:29:13.356313 | orchestrator | | 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 | test | ACTIVE | test-1=192.168.112.171, 192.168.200.83 | N/A (booted from volume) | SCS-1L-1 |
2026-04-13 01:29:13.356330 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-13 01:29:13.664809 | orchestrator | + openstack --os-cloud test server show test
2026-04-13 01:29:16.958384 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-13 01:29:16.958564 | orchestrator | | Field | Value |
2026-04-13 01:29:16.958596 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-13 01:29:16.958611 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-13 01:29:16.958622 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-13 01:29:16.958634 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-13 01:29:16.958645 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-04-13 01:29:16.958657 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-13 01:29:16.958688 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-13 01:29:16.958720 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-13 01:29:16.958738 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-13
01:29:16.958750 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-13 01:29:16.958761 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-13 01:29:16.958772 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-13 01:29:16.958783 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-13 01:29:16.958795 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-13 01:29:16.958806 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-13 01:29:16.958828 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-13 01:29:16.958842 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-13T01:27:27.000000 |
2026-04-13 01:29:16.958862 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-13 01:29:16.958885 | orchestrator | | accessIPv4 | |
2026-04-13 01:29:16.958898 | orchestrator | | accessIPv6 | |
2026-04-13 01:29:16.958911 | orchestrator | | addresses | test-1=192.168.112.171, 192.168.200.83 |
2026-04-13 01:29:16.958924 | orchestrator | | config_drive | |
2026-04-13 01:29:16.958937 | orchestrator | | created | 2026-04-13T01:26:59Z |
2026-04-13 01:29:16.958949 | orchestrator | | description | None |
2026-04-13 01:29:16.958970 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-13 01:29:16.958983 | orchestrator | | hostId | b22928985c73f15a75f80b0e5fb7408605078ac3509834c8b0f5bec8 |
2026-04-13 01:29:16.958996 | orchestrator | | host_status | None |
2026-04-13 01:29:16.959016 | orchestrator | | id | 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 |
2026-04-13 01:29:16.959035 | orchestrator | | image | N/A (booted from volume) |
2026-04-13 01:29:16.959049 | orchestrator | | key_name | test |
2026-04-13 01:29:16.959062 | orchestrator | | locked | False |
2026-04-13 01:29:16.959074 | orchestrator | | locked_reason | None |
2026-04-13 01:29:16.959087 | orchestrator | | name | test |
2026-04-13 01:29:16.959101 | orchestrator | | pinned_availability_zone | None |
2026-04-13 01:29:16.959120 | orchestrator | | progress | 0 |
2026-04-13 01:29:16.959133 | orchestrator | | project_id | 42fc58b67ab8453d98ff01325d1d9500 |
2026-04-13 01:29:16.959146 | orchestrator | | properties | hostname='test' |
2026-04-13 01:29:16.959165 | orchestrator | | security_groups | name='icmp' |
2026-04-13 01:29:16.959178 | orchestrator | | | name='ssh' |
2026-04-13 01:29:16.959191 | orchestrator | | server_groups | None |
2026-04-13 01:29:16.959202 | orchestrator | | status | ACTIVE |
2026-04-13 01:29:16.959221 | orchestrator | | tags | test |
2026-04-13 01:29:16.959233 | orchestrator | | trusted_image_certificates | None |
2026-04-13 01:29:16.959257 | orchestrator | | updated | 2026-04-13T01:27:58Z |
2026-04-13 01:29:16.959269 | orchestrator | | user_id | 59af0bc9d8074dffbd8886d7b719a0cb |
2026-04-13 01:29:16.959280 | orchestrator | | volumes_attached | delete_on_termination='True', id='60ea83b1-a7cc-4126-973d-838bb8da1db2' |
2026-04-13 01:29:16.959291 | orchestrator | | | delete_on_termination='False', id='9a4d81f3-630f-41a4-bb6d-ada703e7bda7' |
2026-04-13 01:29:16.961754 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-13 01:29:17.277226 | orchestrator | + openstack --os-cloud test server show test-1
2026-04-13 01:29:20.385237 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-13 01:29:20.385333 | orchestrator | | Field | Value |
2026-04-13 01:29:20.385348 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-13 01:29:20.385359 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-13 01:29:20.385387 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-13 01:29:20.385398 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-13 01:29:20.385408 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-04-13 01:29:20.385419 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-13 01:29:20.385429 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-13 01:29:20.385456 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-13 01:29:20.385472 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-13 01:29:20.385512 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-13 01:29:20.385529 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-13 01:29:20.385558 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-13 01:29:20.385575 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-13 01:29:20.385667 | orchestrator | | OS-EXT-STS:power_state |
Running | 2026-04-13 01:29:20.385680 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-13 01:29:20.385691 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-13 01:29:20.385701 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-13T01:27:27.000000 | 2026-04-13 01:29:20.385723 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-13 01:29:20.385741 | orchestrator | | accessIPv4 | | 2026-04-13 01:29:20.385753 | orchestrator | | accessIPv6 | | 2026-04-13 01:29:20.385772 | orchestrator | | addresses | test-1=192.168.112.128, 192.168.200.158 | 2026-04-13 01:29:20.385784 | orchestrator | | config_drive | | 2026-04-13 01:29:20.385795 | orchestrator | | created | 2026-04-13T01:27:00Z | 2026-04-13 01:29:20.385808 | orchestrator | | description | None | 2026-04-13 01:29:20.385819 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-13 01:29:20.385831 | orchestrator | | hostId | a3d9de2a138b22739206e893322f99111ac03860fc8a595db5a6a789 | 2026-04-13 01:29:20.385842 | orchestrator | | host_status | None | 2026-04-13 01:29:20.385861 | orchestrator | | id | 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 | 2026-04-13 01:29:20.385878 | orchestrator | | image | N/A (booted from volume) | 2026-04-13 01:29:20.385890 | orchestrator | | key_name | test | 2026-04-13 01:29:20.385909 | orchestrator | | locked | False | 2026-04-13 01:29:20.385921 | orchestrator | | locked_reason | None | 2026-04-13 01:29:20.385932 | orchestrator | | name | test-1 | 2026-04-13 01:29:20.385944 | orchestrator | | pinned_availability_zone | None | 2026-04-13 01:29:20.385957 | orchestrator | | progress | 0 | 2026-04-13 01:29:20.386000 | orchestrator | 
| project_id | 42fc58b67ab8453d98ff01325d1d9500 | 2026-04-13 01:29:20.386011 | orchestrator | | properties | hostname='test-1' | 2026-04-13 01:29:20.386103 | orchestrator | | security_groups | name='icmp' | 2026-04-13 01:29:20.386120 | orchestrator | | | name='ssh' | 2026-04-13 01:29:20.386139 | orchestrator | | server_groups | None | 2026-04-13 01:29:20.386149 | orchestrator | | status | ACTIVE | 2026-04-13 01:29:20.386159 | orchestrator | | tags | test | 2026-04-13 01:29:20.386169 | orchestrator | | trusted_image_certificates | None | 2026-04-13 01:29:20.386179 | orchestrator | | updated | 2026-04-13T01:27:59Z | 2026-04-13 01:29:20.386189 | orchestrator | | user_id | 59af0bc9d8074dffbd8886d7b719a0cb | 2026-04-13 01:29:20.386199 | orchestrator | | volumes_attached | delete_on_termination='True', id='03e66c99-f7e7-4aec-a5a2-09bcdd3a415a' | 2026-04-13 01:29:20.393379 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-13 01:29:20.859338 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-13 01:29:23.968963 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-13 01:29:23.969072 | orchestrator | | Field | Value | 2026-04-13 01:29:23.969084 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-13 01:29:23.969101 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-13 01:29:23.969108 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-13 01:29:23.969114 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-13 01:29:23.969120 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-04-13 01:29:23.969126 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-13 01:29:23.969132 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-13 01:29:23.969153 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-13 01:29:23.969171 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-13 01:29:23.969178 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-13 01:29:23.969184 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-13 01:29:23.969191 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-13 01:29:23.969198 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-13 01:29:23.969204 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-13 01:29:23.969211 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-13 01:29:23.969218 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-13 01:29:23.969224 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-13T01:27:26.000000 | 2026-04-13 01:29:23.969240 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-13 01:29:23.969249 | orchestrator | | accessIPv4 | | 2026-04-13 01:29:23.969256 | orchestrator | | accessIPv6 | | 2026-04-13 01:29:23.969263 | orchestrator | | 
addresses | test-2=192.168.112.104, 192.168.201.29 | 2026-04-13 01:29:23.969269 | orchestrator | | config_drive | | 2026-04-13 01:29:23.969276 | orchestrator | | created | 2026-04-13T01:27:00Z | 2026-04-13 01:29:23.969283 | orchestrator | | description | None | 2026-04-13 01:29:23.969289 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-13 01:29:23.969296 | orchestrator | | hostId | b22928985c73f15a75f80b0e5fb7408605078ac3509834c8b0f5bec8 | 2026-04-13 01:29:23.969302 | orchestrator | | host_status | None | 2026-04-13 01:29:23.969318 | orchestrator | | id | c223eb87-e5cc-427d-a86b-cef45ba92bbe | 2026-04-13 01:29:23.969327 | orchestrator | | image | N/A (booted from volume) | 2026-04-13 01:29:23.969334 | orchestrator | | key_name | test | 2026-04-13 01:29:23.969340 | orchestrator | | locked | False | 2026-04-13 01:29:23.969347 | orchestrator | | locked_reason | None | 2026-04-13 01:29:23.969354 | orchestrator | | name | test-2 | 2026-04-13 01:29:23.969360 | orchestrator | | pinned_availability_zone | None | 2026-04-13 01:29:23.969367 | orchestrator | | progress | 0 | 2026-04-13 01:29:23.969373 | orchestrator | | project_id | 42fc58b67ab8453d98ff01325d1d9500 | 2026-04-13 01:29:23.969384 | orchestrator | | properties | hostname='test-2' | 2026-04-13 01:29:23.969395 | orchestrator | | security_groups | name='icmp' | 2026-04-13 01:29:23.969404 | orchestrator | | | name='ssh' | 2026-04-13 01:29:23.969411 | orchestrator | | server_groups | None | 2026-04-13 01:29:23.969417 | orchestrator | | status | ACTIVE | 2026-04-13 01:29:23.969424 | orchestrator | | tags | test | 2026-04-13 01:29:23.969431 | orchestrator | | 
trusted_image_certificates | None | 2026-04-13 01:29:23.969437 | orchestrator | | updated | 2026-04-13T01:27:59Z | 2026-04-13 01:29:23.969444 | orchestrator | | user_id | 59af0bc9d8074dffbd8886d7b719a0cb | 2026-04-13 01:29:23.969454 | orchestrator | | volumes_attached | delete_on_termination='True', id='86a0a0df-281e-400d-b078-536bbe48d7e8' | 2026-04-13 01:29:23.974275 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-13 01:29:24.431895 | orchestrator | + openstack --os-cloud test server show test-3 2026-04-13 01:29:27.651609 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-13 01:29:27.651705 | orchestrator | | Field | Value | 2026-04-13 01:29:27.651716 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-13 01:29:27.651724 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-13 01:29:27.651731 | orchestrator | | 
OS-EXT-AZ:availability_zone | nova | 2026-04-13 01:29:27.651737 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-13 01:29:27.651773 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-04-13 01:29:27.651797 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-13 01:29:27.651804 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-13 01:29:27.651824 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-13 01:29:27.651831 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-13 01:29:27.651859 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-13 01:29:27.651867 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-13 01:29:27.651873 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-13 01:29:27.651880 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-13 01:29:27.651886 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-13 01:29:27.651903 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-13 01:29:27.651910 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-13 01:29:27.651916 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-13T01:27:29.000000 | 2026-04-13 01:29:27.651928 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-13 01:29:27.651935 | orchestrator | | accessIPv4 | | 2026-04-13 01:29:27.652226 | orchestrator | | accessIPv6 | | 2026-04-13 01:29:27.652237 | orchestrator | | addresses | test-2=192.168.112.157, 192.168.201.111 | 2026-04-13 01:29:27.652245 | orchestrator | | config_drive | | 2026-04-13 01:29:27.652253 | orchestrator | | created | 2026-04-13T01:27:02Z | 2026-04-13 01:29:27.652260 | orchestrator | | description | None | 2026-04-13 01:29:27.652273 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', 
id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-13 01:29:27.652280 | orchestrator | | hostId | a3d9de2a138b22739206e893322f99111ac03860fc8a595db5a6a789 | 2026-04-13 01:29:27.652291 | orchestrator | | host_status | None | 2026-04-13 01:29:27.652304 | orchestrator | | id | b09b4282-fa9c-4d06-8825-54dbfb06279d | 2026-04-13 01:29:27.652312 | orchestrator | | image | N/A (booted from volume) | 2026-04-13 01:29:27.652321 | orchestrator | | key_name | test | 2026-04-13 01:29:27.652332 | orchestrator | | locked | False | 2026-04-13 01:29:27.652343 | orchestrator | | locked_reason | None | 2026-04-13 01:29:27.652353 | orchestrator | | name | test-3 | 2026-04-13 01:29:27.652369 | orchestrator | | pinned_availability_zone | None | 2026-04-13 01:29:27.652381 | orchestrator | | progress | 0 | 2026-04-13 01:29:27.652397 | orchestrator | | project_id | 42fc58b67ab8453d98ff01325d1d9500 | 2026-04-13 01:29:27.652445 | orchestrator | | properties | hostname='test-3' | 2026-04-13 01:29:27.652461 | orchestrator | | security_groups | name='icmp' | 2026-04-13 01:29:27.652469 | orchestrator | | | name='ssh' | 2026-04-13 01:29:27.652477 | orchestrator | | server_groups | None | 2026-04-13 01:29:27.652484 | orchestrator | | status | ACTIVE | 2026-04-13 01:29:27.652507 | orchestrator | | tags | test | 2026-04-13 01:29:27.652520 | orchestrator | | trusted_image_certificates | None | 2026-04-13 01:29:27.652527 | orchestrator | | updated | 2026-04-13T01:28:00Z | 2026-04-13 01:29:27.652533 | orchestrator | | user_id | 59af0bc9d8074dffbd8886d7b719a0cb | 2026-04-13 01:29:27.652543 | orchestrator | | volumes_attached | delete_on_termination='True', id='dd0deac5-2e12-4cb0-8226-edcb8d496ef8' | 2026-04-13 01:29:27.664190 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-13 01:29:27.964901 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-13 01:29:31.103988 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-13 01:29:31.104123 | orchestrator | | Field | Value | 2026-04-13 01:29:31.104153 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-13 01:29:31.104173 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-13 01:29:31.104225 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-13 01:29:31.104283 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-13 01:29:31.104307 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-13 01:29:31.104324 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-13 01:29:31.104361 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-13 
01:29:31.104409 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-13 01:29:31.104429 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-13 01:29:31.104448 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-13 01:29:31.104467 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-13 01:29:31.104486 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-13 01:29:31.104565 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-13 01:29:31.104586 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-13 01:29:31.104603 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-13 01:29:31.104622 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-13 01:29:31.104653 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-13T01:27:29.000000 | 2026-04-13 01:29:31.104687 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-13 01:29:31.104708 | orchestrator | | accessIPv4 | | 2026-04-13 01:29:31.104728 | orchestrator | | accessIPv6 | | 2026-04-13 01:29:31.104746 | orchestrator | | addresses | test-3=192.168.112.179, 192.168.202.160 | 2026-04-13 01:29:31.104775 | orchestrator | | config_drive | | 2026-04-13 01:29:31.104787 | orchestrator | | created | 2026-04-13T01:27:03Z | 2026-04-13 01:29:31.104798 | orchestrator | | description | None | 2026-04-13 01:29:31.104809 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-13 01:29:31.104821 | orchestrator | | hostId | b22928985c73f15a75f80b0e5fb7408605078ac3509834c8b0f5bec8 | 2026-04-13 01:29:31.104848 | orchestrator | | host_status | None | 2026-04-13 01:29:31.104869 | orchestrator | | id | 
de531e7b-bf01-4329-a336-47ad25575537 | 2026-04-13 01:29:31.104880 | orchestrator | | image | N/A (booted from volume) | 2026-04-13 01:29:31.104891 | orchestrator | | key_name | test | 2026-04-13 01:29:31.104909 | orchestrator | | locked | False | 2026-04-13 01:29:31.104920 | orchestrator | | locked_reason | None | 2026-04-13 01:29:31.104931 | orchestrator | | name | test-4 | 2026-04-13 01:29:31.104942 | orchestrator | | pinned_availability_zone | None | 2026-04-13 01:29:31.104953 | orchestrator | | progress | 0 | 2026-04-13 01:29:31.104964 | orchestrator | | project_id | 42fc58b67ab8453d98ff01325d1d9500 | 2026-04-13 01:29:31.104975 | orchestrator | | properties | hostname='test-4' | 2026-04-13 01:29:31.104994 | orchestrator | | security_groups | name='icmp' | 2026-04-13 01:29:31.105006 | orchestrator | | | name='ssh' | 2026-04-13 01:29:31.105023 | orchestrator | | server_groups | None | 2026-04-13 01:29:31.105034 | orchestrator | | status | ACTIVE | 2026-04-13 01:29:31.105045 | orchestrator | | tags | test | 2026-04-13 01:29:31.105056 | orchestrator | | trusted_image_certificates | None | 2026-04-13 01:29:31.105067 | orchestrator | | updated | 2026-04-13T01:28:01Z | 2026-04-13 01:29:31.105078 | orchestrator | | user_id | 59af0bc9d8074dffbd8886d7b719a0cb | 2026-04-13 01:29:31.105162 | orchestrator | | volumes_attached | delete_on_termination='True', id='bc3f0a05-6149-45e2-bf16-c2d389a3c666' | 2026-04-13 01:29:31.110344 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-13 01:29:31.428889 | orchestrator | + server_ping 2026-04-13 01:29:31.430187 | orchestrator | ++ openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-13 01:29:31.430352 | orchestrator | ++ tr -d '\r' 2026-04-13 01:29:34.505455 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-13 01:29:34.505542 | orchestrator | + ping -c3 192.168.112.104 2026-04-13 01:29:34.519787 | orchestrator | PING 192.168.112.104 (192.168.112.104) 56(84) bytes of data. 2026-04-13 01:29:34.519895 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=1 ttl=63 time=6.78 ms 2026-04-13 01:29:35.517820 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=2 ttl=63 time=2.25 ms 2026-04-13 01:29:36.519972 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=3 ttl=63 time=1.87 ms 2026-04-13 01:29:36.520092 | orchestrator | 2026-04-13 01:29:36.520117 | orchestrator | --- 192.168.112.104 ping statistics --- 2026-04-13 01:29:36.520135 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-13 01:29:36.520152 | orchestrator | rtt min/avg/max/mdev = 1.871/3.634/6.779/2.229 ms 2026-04-13 01:29:36.520170 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-13 01:29:36.520187 | orchestrator | + ping -c3 192.168.112.157 2026-04-13 01:29:36.533014 | orchestrator | PING 192.168.112.157 (192.168.112.157) 56(84) bytes of data. 
2026-04-13 01:29:36.533111 | orchestrator | 64 bytes from 192.168.112.157: icmp_seq=1 ttl=63 time=8.71 ms 2026-04-13 01:29:37.528897 | orchestrator | 64 bytes from 192.168.112.157: icmp_seq=2 ttl=63 time=2.64 ms 2026-04-13 01:29:38.530172 | orchestrator | 64 bytes from 192.168.112.157: icmp_seq=3 ttl=63 time=1.71 ms 2026-04-13 01:29:38.530276 | orchestrator | 2026-04-13 01:29:38.530293 | orchestrator | --- 192.168.112.157 ping statistics --- 2026-04-13 01:29:38.530306 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-13 01:29:38.530318 | orchestrator | rtt min/avg/max/mdev = 1.711/4.354/8.707/3.101 ms 2026-04-13 01:29:38.530977 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-13 01:29:38.531044 | orchestrator | + ping -c3 192.168.112.179 2026-04-13 01:29:38.546703 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 2026-04-13 01:29:38.546787 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=9.46 ms 2026-04-13 01:29:39.542318 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.93 ms 2026-04-13 01:29:40.542857 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.30 ms 2026-04-13 01:29:40.542971 | orchestrator | 2026-04-13 01:29:40.542986 | orchestrator | --- 192.168.112.179 ping statistics --- 2026-04-13 01:29:40.542999 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-13 01:29:40.543011 | orchestrator | rtt min/avg/max/mdev = 2.296/4.895/9.457/3.236 ms 2026-04-13 01:29:40.543388 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-13 01:29:40.543412 | orchestrator | + ping -c3 192.168.112.128 2026-04-13 01:29:40.559491 | orchestrator | PING 192.168.112.128 (192.168.112.128) 56(84) bytes of data. 
2026-04-13 01:29:40.559616 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=1 ttl=63 time=9.75 ms 2026-04-13 01:29:41.554058 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=2 ttl=63 time=2.19 ms 2026-04-13 01:29:42.555775 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=3 ttl=63 time=1.78 ms 2026-04-13 01:29:42.555898 | orchestrator | 2026-04-13 01:29:42.555924 | orchestrator | --- 192.168.112.128 ping statistics --- 2026-04-13 01:29:42.555946 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-13 01:29:42.555968 | orchestrator | rtt min/avg/max/mdev = 1.777/4.571/9.747/3.663 ms 2026-04-13 01:29:42.556005 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-13 01:29:42.556019 | orchestrator | + ping -c3 192.168.112.171 2026-04-13 01:29:42.571353 | orchestrator | PING 192.168.112.171 (192.168.112.171) 56(84) bytes of data. 2026-04-13 01:29:42.571446 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=1 ttl=63 time=10.5 ms 2026-04-13 01:29:43.565375 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=2 ttl=63 time=2.53 ms 2026-04-13 01:29:44.567769 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=3 ttl=63 time=2.12 ms 2026-04-13 01:29:44.567873 | orchestrator | 2026-04-13 01:29:44.567892 | orchestrator | --- 192.168.112.171 ping statistics --- 2026-04-13 01:29:44.567909 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-13 01:29:44.567925 | orchestrator | rtt min/avg/max/mdev = 2.124/5.049/10.493/3.852 ms 2026-04-13 01:29:44.568582 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-13 01:29:44.568617 | orchestrator | + compute_list 2026-04-13 01:29:44.568655 | orchestrator | + osism manage compute list testbed-node-3 2026-04-13 01:29:46.248233 | orchestrator | 2026-04-13 01:29:46 | ERROR  | Unable to get ansible vault password 2026-04-13 
01:29:46.248462 | orchestrator | 2026-04-13 01:29:46 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-13 01:29:46.248482 | orchestrator | 2026-04-13 01:29:46 | ERROR  | Dropping encrypted entries 2026-04-13 01:29:50.330867 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-13 01:29:50.330995 | orchestrator | | ID | Name | Status | 2026-04-13 01:29:50.331014 | orchestrator | |--------------------------------------+--------+----------| 2026-04-13 01:29:50.331026 | orchestrator | | b09b4282-fa9c-4d06-8825-54dbfb06279d | test-3 | ACTIVE | 2026-04-13 01:29:50.331038 | orchestrator | | 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 | test-1 | ACTIVE | 2026-04-13 01:29:50.331049 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-13 01:29:50.915586 | orchestrator | + osism manage compute list testbed-node-4 2026-04-13 01:29:52.558390 | orchestrator | 2026-04-13 01:29:52 | ERROR  | Unable to get ansible vault password 2026-04-13 01:29:52.558491 | orchestrator | 2026-04-13 01:29:52 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-13 01:29:52.558512 | orchestrator | 2026-04-13 01:29:52 | ERROR  | Dropping encrypted entries 2026-04-13 01:29:53.777955 | orchestrator | +------+--------+----------+ 2026-04-13 01:29:53.778163 | orchestrator | | ID | Name | Status | 2026-04-13 01:29:53.778198 | orchestrator | |------+--------+----------| 2026-04-13 01:29:53.778219 | orchestrator | +------+--------+----------+ 2026-04-13 01:29:54.116266 | orchestrator | + osism manage compute list testbed-node-5 2026-04-13 01:29:55.817833 | orchestrator | 2026-04-13 01:29:55 | ERROR  | Unable to get ansible vault password 2026-04-13 01:29:55.817917 | orchestrator | 2026-04-13 01:29:55 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 
2026-04-13 01:29:55.817929 | orchestrator | 2026-04-13 01:29:55 | ERROR  | Dropping encrypted entries
2026-04-13 01:29:57.458300 | orchestrator | +--------------------------------------+--------+----------+
2026-04-13 01:29:57.458408 | orchestrator | | ID                                   | Name   | Status   |
2026-04-13 01:29:57.458430 | orchestrator | |--------------------------------------+--------+----------|
2026-04-13 01:29:57.458449 | orchestrator | | de531e7b-bf01-4329-a336-47ad25575537 | test-4 | ACTIVE   |
2026-04-13 01:29:57.458467 | orchestrator | | c223eb87-e5cc-427d-a86b-cef45ba92bbe | test-2 | ACTIVE   |
2026-04-13 01:29:57.458485 | orchestrator | | 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 | test   | ACTIVE   |
2026-04-13 01:29:57.458504 | orchestrator | +--------------------------------------+--------+----------+
2026-04-13 01:29:57.801269 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2026-04-13 01:29:59.516910 | orchestrator | 2026-04-13 01:29:59 | ERROR  | Unable to get ansible vault password
2026-04-13 01:29:59.518254 | orchestrator | 2026-04-13 01:29:59 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:29:59.518343 | orchestrator | 2026-04-13 01:29:59 | ERROR  | Dropping encrypted entries
2026-04-13 01:30:00.652112 | orchestrator | 2026-04-13 01:30:00 | INFO  | No migratable instances found on node testbed-node-4
2026-04-13 01:30:01.016602 | orchestrator | + compute_list
2026-04-13 01:30:01.016703 | orchestrator | + osism manage compute list testbed-node-3
2026-04-13 01:30:02.826267 | orchestrator | 2026-04-13 01:30:02 | ERROR  | Unable to get ansible vault password
2026-04-13 01:30:02.826374 | orchestrator | 2026-04-13 01:30:02 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:30:02.826420 | orchestrator | 2026-04-13 01:30:02 | ERROR  | Dropping encrypted entries
2026-04-13 01:30:05.257882 | orchestrator | +--------------------------------------+--------+----------+
2026-04-13 01:30:05.257981 | orchestrator | | ID                                   | Name   | Status   |
2026-04-13 01:30:05.257996 | orchestrator | |--------------------------------------+--------+----------|
2026-04-13 01:30:05.258007 | orchestrator | | b09b4282-fa9c-4d06-8825-54dbfb06279d | test-3 | ACTIVE   |
2026-04-13 01:30:05.258081 | orchestrator | | 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 | test-1 | ACTIVE   |
2026-04-13 01:30:05.258094 | orchestrator | +--------------------------------------+--------+----------+
2026-04-13 01:30:05.616880 | orchestrator | + osism manage compute list testbed-node-4
2026-04-13 01:30:07.252780 | orchestrator | 2026-04-13 01:30:07 | ERROR  | Unable to get ansible vault password
2026-04-13 01:30:07.252856 | orchestrator | 2026-04-13 01:30:07 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:30:07.252864 | orchestrator | 2026-04-13 01:30:07 | ERROR  | Dropping encrypted entries
2026-04-13 01:30:08.532621 | orchestrator | +------+--------+----------+
2026-04-13 01:30:08.532717 | orchestrator | | ID   | Name   | Status   |
2026-04-13 01:30:08.532733 | orchestrator | |------+--------+----------|
2026-04-13 01:30:08.532745 | orchestrator | +------+--------+----------+
2026-04-13 01:30:08.757199 | orchestrator | + osism manage compute list testbed-node-5
2026-04-13 01:30:10.187330 | orchestrator | 2026-04-13 01:30:10 | ERROR  | Unable to get ansible vault password
2026-04-13 01:30:10.187438 | orchestrator | 2026-04-13 01:30:10 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:30:10.187465 | orchestrator | 2026-04-13 01:30:10 | ERROR  | Dropping encrypted entries
2026-04-13 01:30:11.639453 | orchestrator | +--------------------------------------+--------+----------+
2026-04-13 01:30:11.639519 | orchestrator | | ID                                   | Name   | Status   |
2026-04-13 01:30:11.639529 | orchestrator | |--------------------------------------+--------+----------|
2026-04-13 01:30:11.639537 | orchestrator | | de531e7b-bf01-4329-a336-47ad25575537 | test-4 | ACTIVE   |
2026-04-13 01:30:11.639545 | orchestrator | | c223eb87-e5cc-427d-a86b-cef45ba92bbe | test-2 | ACTIVE   |
2026-04-13 01:30:11.639553 | orchestrator | | 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 | test   | ACTIVE   |
2026-04-13 01:30:11.639608 | orchestrator | +--------------------------------------+--------+----------+
2026-04-13 01:30:11.857483 | orchestrator | + server_ping
2026-04-13 01:30:11.858966 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-13 01:30:11.859005 | orchestrator | ++ tr -d '\r'
2026-04-13 01:30:14.798523 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:30:14.798698 | orchestrator | + ping -c3 192.168.112.104
2026-04-13 01:30:14.808457 | orchestrator | PING 192.168.112.104 (192.168.112.104) 56(84) bytes of data.
2026-04-13 01:30:14.808553 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=1 ttl=63 time=5.55 ms
2026-04-13 01:30:15.807182 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=2 ttl=63 time=2.40 ms
2026-04-13 01:30:16.808474 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=3 ttl=63 time=1.88 ms
2026-04-13 01:30:16.808598 | orchestrator |
2026-04-13 01:30:16.808616 | orchestrator | --- 192.168.112.104 ping statistics ---
2026-04-13 01:30:16.808629 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-13 01:30:16.808640 | orchestrator | rtt min/avg/max/mdev = 1.881/3.276/5.550/1.621 ms
2026-04-13 01:30:16.809188 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:30:16.809212 | orchestrator | + ping -c3 192.168.112.157
2026-04-13 01:30:16.824099 | orchestrator | PING 192.168.112.157 (192.168.112.157) 56(84) bytes of data.
2026-04-13 01:30:16.824193 | orchestrator | 64 bytes from 192.168.112.157: icmp_seq=1 ttl=63 time=11.0 ms
2026-04-13 01:30:17.817096 | orchestrator | 64 bytes from 192.168.112.157: icmp_seq=2 ttl=63 time=2.54 ms
2026-04-13 01:30:18.818839 | orchestrator | 64 bytes from 192.168.112.157: icmp_seq=3 ttl=63 time=2.14 ms
2026-04-13 01:30:18.818931 | orchestrator |
2026-04-13 01:30:18.818946 | orchestrator | --- 192.168.112.157 ping statistics ---
2026-04-13 01:30:18.818958 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-13 01:30:18.818968 | orchestrator | rtt min/avg/max/mdev = 2.138/5.211/10.958/4.066 ms
2026-04-13 01:30:18.818979 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:30:18.818989 | orchestrator | + ping -c3 192.168.112.179
2026-04-13 01:30:18.831638 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2026-04-13 01:30:18.831735 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=6.91 ms
2026-04-13 01:30:19.828551 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.62 ms
2026-04-13 01:30:20.830194 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.20 ms
2026-04-13 01:30:20.831328 | orchestrator |
2026-04-13 01:30:20.831404 | orchestrator | --- 192.168.112.179 ping statistics ---
2026-04-13 01:30:20.831418 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-13 01:30:20.831429 | orchestrator | rtt min/avg/max/mdev = 2.202/3.911/6.913/2.129 ms
2026-04-13 01:30:20.831464 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:30:20.831482 | orchestrator | + ping -c3 192.168.112.128
2026-04-13 01:30:20.848331 | orchestrator | PING 192.168.112.128 (192.168.112.128) 56(84) bytes of data.
2026-04-13 01:30:20.848426 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=1 ttl=63 time=12.4 ms
2026-04-13 01:30:21.840340 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=2 ttl=63 time=2.55 ms
2026-04-13 01:30:22.842489 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=3 ttl=63 time=2.19 ms
2026-04-13 01:30:22.842619 | orchestrator |
2026-04-13 01:30:22.842636 | orchestrator | --- 192.168.112.128 ping statistics ---
2026-04-13 01:30:22.842646 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-13 01:30:22.842656 | orchestrator | rtt min/avg/max/mdev = 2.189/5.724/12.432/4.745 ms
2026-04-13 01:30:22.842666 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:30:22.842675 | orchestrator | + ping -c3 192.168.112.171
2026-04-13 01:30:22.857045 | orchestrator | PING 192.168.112.171 (192.168.112.171) 56(84) bytes of data.
2026-04-13 01:30:22.857115 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=1 ttl=63 time=9.88 ms
2026-04-13 01:30:23.851350 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=2 ttl=63 time=2.85 ms
2026-04-13 01:30:24.852043 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=3 ttl=63 time=2.27 ms
2026-04-13 01:30:24.852156 | orchestrator |
2026-04-13 01:30:24.852173 | orchestrator | --- 192.168.112.171 ping statistics ---
2026-04-13 01:30:24.852186 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-13 01:30:24.852198 | orchestrator | rtt min/avg/max/mdev = 2.270/4.999/9.878/3.458 ms
2026-04-13 01:30:24.853089 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2026-04-13 01:30:26.484820 | orchestrator | 2026-04-13 01:30:26 | ERROR  | Unable to get ansible vault password
2026-04-13 01:30:26.484938 | orchestrator | 2026-04-13 01:30:26 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:30:26.484956 | orchestrator | 2026-04-13 01:30:26 | ERROR  | Dropping encrypted entries
2026-04-13 01:30:28.149972 | orchestrator | 2026-04-13 01:30:28 | INFO  | Live migrating server de531e7b-bf01-4329-a336-47ad25575537
2026-04-13 01:30:41.365081 | orchestrator | 2026-04-13 01:30:41 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:30:43.774771 | orchestrator | 2026-04-13 01:30:43 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:30:46.116178 | orchestrator | 2026-04-13 01:30:46 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:30:48.817663 | orchestrator | 2026-04-13 01:30:48 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:30:51.133822 | orchestrator | 2026-04-13 01:30:51 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:30:53.428799 | orchestrator | 2026-04-13 01:30:53 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:30:55.783221 | orchestrator | 2026-04-13 01:30:55 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:30:58.286873 | orchestrator | 2026-04-13 01:30:58 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:31:00.565322 | orchestrator | 2026-04-13 01:31:00 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) completed with status ACTIVE
2026-04-13 01:31:00.565400 | orchestrator | 2026-04-13 01:31:00 | INFO  | Live migrating server c223eb87-e5cc-427d-a86b-cef45ba92bbe
2026-04-13 01:31:12.526936 | orchestrator | 2026-04-13 01:31:12 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:31:14.887736 | orchestrator | 2026-04-13 01:31:14 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:31:17.355840 | orchestrator | 2026-04-13 01:31:17 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:31:19.761164 | orchestrator | 2026-04-13 01:31:19 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:31:22.142326 | orchestrator | 2026-04-13 01:31:22 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:31:24.419179 | orchestrator | 2026-04-13 01:31:24 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:31:26.729077 | orchestrator | 2026-04-13 01:31:26 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:31:29.035099 | orchestrator | 2026-04-13 01:31:29 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:31:31.353278 | orchestrator | 2026-04-13 01:31:31 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:31:33.670671 | orchestrator | 2026-04-13 01:31:33 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) completed with status ACTIVE
2026-04-13 01:31:33.670834 | orchestrator | 2026-04-13 01:31:33 | INFO  | Live migrating server 626b2dbe-4416-4090-ad9e-783cb7f1b0e1
2026-04-13 01:31:44.668158 | orchestrator | 2026-04-13 01:31:44 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:31:47.091028 | orchestrator | 2026-04-13 01:31:47 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:31:49.513612 | orchestrator | 2026-04-13 01:31:49 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:31:51.933044 | orchestrator | 2026-04-13 01:31:51 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:31:54.286174 | orchestrator | 2026-04-13 01:31:54 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:31:56.614307 | orchestrator | 2026-04-13 01:31:56 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:31:59.018549 | orchestrator | 2026-04-13 01:31:59 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:32:01.455963 | orchestrator | 2026-04-13 01:32:01 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:32:04.064156 | orchestrator | 2026-04-13 01:32:04 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:32:06.364214 | orchestrator | 2026-04-13 01:32:06 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:32:09.091516 | orchestrator | 2026-04-13 01:32:09 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:32:11.401352 | orchestrator | 2026-04-13 01:32:11 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) completed with status ACTIVE
2026-04-13 01:32:11.742298 | orchestrator | + compute_list
2026-04-13 01:32:11.742398 | orchestrator | + osism manage compute list testbed-node-3
2026-04-13 01:32:13.374275 | orchestrator | 2026-04-13 01:32:13 | ERROR  | Unable to get ansible vault password
2026-04-13 01:32:13.374365 | orchestrator | 2026-04-13 01:32:13 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:32:13.374380 | orchestrator | 2026-04-13 01:32:13 | ERROR  | Dropping encrypted entries
2026-04-13 01:32:15.234080 | orchestrator | +--------------------------------------+--------+----------+
2026-04-13 01:32:15.234191 | orchestrator | | ID                                   | Name   | Status   |
2026-04-13 01:32:15.234207 | orchestrator | |--------------------------------------+--------+----------|
2026-04-13 01:32:15.234220 | orchestrator | | de531e7b-bf01-4329-a336-47ad25575537 | test-4 | ACTIVE   |
2026-04-13 01:32:15.234231 | orchestrator | | b09b4282-fa9c-4d06-8825-54dbfb06279d | test-3 | ACTIVE   |
2026-04-13 01:32:15.234242 | orchestrator | | 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 | test-1 | ACTIVE   |
2026-04-13 01:32:15.234254 | orchestrator | | c223eb87-e5cc-427d-a86b-cef45ba92bbe | test-2 | ACTIVE   |
2026-04-13 01:32:15.234265 | orchestrator | | 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 | test   | ACTIVE   |
2026-04-13 01:32:15.234276 | orchestrator | +--------------------------------------+--------+----------+
2026-04-13 01:32:15.585509 | orchestrator | + osism manage compute list testbed-node-4
2026-04-13 01:32:17.148988 | orchestrator | 2026-04-13 01:32:17 | ERROR  | Unable to get ansible vault password
2026-04-13 01:32:17.149053 | orchestrator | 2026-04-13 01:32:17 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:32:17.149061 | orchestrator | 2026-04-13 01:32:17 | ERROR  | Dropping encrypted entries
2026-04-13 01:32:18.419555 | orchestrator | +------+--------+----------+
2026-04-13 01:32:18.419642 | orchestrator | | ID   | Name   | Status   |
2026-04-13 01:32:18.419653 | orchestrator | |------+--------+----------|
2026-04-13 01:32:18.419662 | orchestrator | +------+--------+----------+
2026-04-13 01:32:18.791880 | orchestrator | + osism manage compute list testbed-node-5
2026-04-13 01:32:20.478908 | orchestrator | 2026-04-13 01:32:20 | ERROR  | Unable to get ansible vault password
2026-04-13 01:32:20.479802 | orchestrator | 2026-04-13 01:32:20 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:32:20.479837 | orchestrator | 2026-04-13 01:32:20 | ERROR  | Dropping encrypted entries
2026-04-13 01:32:21.691327 | orchestrator | +------+--------+----------+
2026-04-13 01:32:21.691457 | orchestrator | | ID   | Name   | Status   |
2026-04-13 01:32:21.691483 | orchestrator | |------+--------+----------|
2026-04-13 01:32:21.691500 | orchestrator | +------+--------+----------+
2026-04-13 01:32:22.035148 | orchestrator | + server_ping
2026-04-13 01:32:22.036737 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-13 01:32:22.036837 | orchestrator | ++ tr -d '\r'
2026-04-13 01:32:24.930545 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:32:24.930645 | orchestrator | + ping -c3 192.168.112.104
2026-04-13 01:32:24.943011 | orchestrator | PING 192.168.112.104 (192.168.112.104) 56(84) bytes of data.
2026-04-13 01:32:24.943125 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=1 ttl=63 time=9.35 ms
2026-04-13 01:32:25.938283 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=2 ttl=63 time=2.76 ms
2026-04-13 01:32:26.939227 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=3 ttl=63 time=2.08 ms
2026-04-13 01:32:26.939423 | orchestrator |
2026-04-13 01:32:26.939442 | orchestrator | --- 192.168.112.104 ping statistics ---
2026-04-13 01:32:26.939452 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-13 01:32:26.939460 | orchestrator | rtt min/avg/max/mdev = 2.077/4.729/9.350/3.279 ms
2026-04-13 01:32:26.939478 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:32:26.939487 | orchestrator | + ping -c3 192.168.112.157
2026-04-13 01:32:26.952675 | orchestrator | PING 192.168.112.157 (192.168.112.157) 56(84) bytes of data.
2026-04-13 01:32:26.952803 | orchestrator | 64 bytes from 192.168.112.157: icmp_seq=1 ttl=63 time=7.43 ms
2026-04-13 01:32:27.949724 | orchestrator | 64 bytes from 192.168.112.157: icmp_seq=2 ttl=63 time=2.52 ms
2026-04-13 01:32:28.951108 | orchestrator | 64 bytes from 192.168.112.157: icmp_seq=3 ttl=63 time=2.00 ms
2026-04-13 01:32:28.951214 | orchestrator |
2026-04-13 01:32:28.951235 | orchestrator | --- 192.168.112.157 ping statistics ---
2026-04-13 01:32:28.951423 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-13 01:32:28.951438 | orchestrator | rtt min/avg/max/mdev = 1.999/3.982/7.429/2.446 ms
2026-04-13 01:32:28.951463 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:32:28.951476 | orchestrator | + ping -c3 192.168.112.179
2026-04-13 01:32:28.964863 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2026-04-13 01:32:28.964950 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=9.81 ms
2026-04-13 01:32:29.958815 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.57 ms
2026-04-13 01:32:30.960518 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.28 ms
2026-04-13 01:32:30.961317 | orchestrator |
2026-04-13 01:32:30.961337 | orchestrator | --- 192.168.112.179 ping statistics ---
2026-04-13 01:32:30.961344 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-13 01:32:30.961349 | orchestrator | rtt min/avg/max/mdev = 2.280/4.885/9.812/3.485 ms
2026-04-13 01:32:30.961363 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:32:30.961369 | orchestrator | + ping -c3 192.168.112.128
2026-04-13 01:32:30.978296 | orchestrator | PING 192.168.112.128 (192.168.112.128) 56(84) bytes of data.
2026-04-13 01:32:30.978373 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=1 ttl=63 time=11.8 ms
2026-04-13 01:32:31.970932 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=2 ttl=63 time=3.06 ms
2026-04-13 01:32:32.971210 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=3 ttl=63 time=2.08 ms
2026-04-13 01:32:32.971314 | orchestrator |
2026-04-13 01:32:32.971471 | orchestrator | --- 192.168.112.128 ping statistics ---
2026-04-13 01:32:32.971493 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-13 01:32:32.971505 | orchestrator | rtt min/avg/max/mdev = 2.082/5.652/11.817/4.377 ms
2026-04-13 01:32:32.971531 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:32:32.971544 | orchestrator | + ping -c3 192.168.112.171
2026-04-13 01:32:32.983667 | orchestrator | PING 192.168.112.171 (192.168.112.171) 56(84) bytes of data.
2026-04-13 01:32:32.983749 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=1 ttl=63 time=6.73 ms
2026-04-13 01:32:33.982412 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=2 ttl=63 time=2.81 ms
2026-04-13 01:32:34.983347 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=3 ttl=63 time=2.26 ms
2026-04-13 01:32:34.983444 | orchestrator |
2026-04-13 01:32:34.983485 | orchestrator | --- 192.168.112.171 ping statistics ---
2026-04-13 01:32:34.983496 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-13 01:32:34.983505 | orchestrator | rtt min/avg/max/mdev = 2.261/3.932/6.728/1.989 ms
2026-04-13 01:32:34.983514 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-04-13 01:32:36.584106 | orchestrator | 2026-04-13 01:32:36 | ERROR  | Unable to get ansible vault password
2026-04-13 01:32:36.584229 | orchestrator | 2026-04-13 01:32:36 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:32:36.584248 | orchestrator | 2026-04-13 01:32:36 | ERROR  | Dropping encrypted entries
2026-04-13 01:32:38.288312 | orchestrator | 2026-04-13 01:32:38 | INFO  | Live migrating server de531e7b-bf01-4329-a336-47ad25575537
2026-04-13 01:32:51.978211 | orchestrator | 2026-04-13 01:32:51 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:32:54.387031 | orchestrator | 2026-04-13 01:32:54 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:32:56.952172 | orchestrator | 2026-04-13 01:32:56 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:32:59.312931 | orchestrator | 2026-04-13 01:32:59 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:33:01.748892 | orchestrator | 2026-04-13 01:33:01 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:33:04.045646 | orchestrator | 2026-04-13 01:33:04 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:33:06.422679 | orchestrator | 2026-04-13 01:33:06 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:33:08.818284 | orchestrator | 2026-04-13 01:33:08 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:33:11.190056 | orchestrator | 2026-04-13 01:33:11 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:33:13.510206 | orchestrator | 2026-04-13 01:33:13 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:33:15.813872 | orchestrator | 2026-04-13 01:33:15 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:33:18.132307 | orchestrator | 2026-04-13 01:33:18 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) completed with status ACTIVE
2026-04-13 01:33:18.132405 | orchestrator | 2026-04-13 01:33:18 | INFO  | Live migrating server b09b4282-fa9c-4d06-8825-54dbfb06279d
2026-04-13 01:33:30.376387 | orchestrator | 2026-04-13 01:33:30 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:33:32.727255 | orchestrator | 2026-04-13 01:33:32 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:33:35.173561 | orchestrator | 2026-04-13 01:33:35 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:33:37.470722 | orchestrator | 2026-04-13 01:33:37 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:33:39.732205 | orchestrator | 2026-04-13 01:33:39 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:33:42.110476 | orchestrator | 2026-04-13 01:33:42 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:33:44.418617 | orchestrator | 2026-04-13 01:33:44 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:33:46.711606 | orchestrator | 2026-04-13 01:33:46 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:33:49.026279 | orchestrator | 2026-04-13 01:33:49 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) completed with status ACTIVE
2026-04-13 01:33:49.026392 | orchestrator | 2026-04-13 01:33:49 | INFO  | Live migrating server 4b5d6bc5-fac4-4668-9192-aaf58b8a3752
2026-04-13 01:34:02.114718 | orchestrator | 2026-04-13 01:34:02 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:34:04.461171 | orchestrator | 2026-04-13 01:34:04 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:34:06.790299 | orchestrator | 2026-04-13 01:34:06 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:34:09.186330 | orchestrator | 2026-04-13 01:34:09 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:34:11.522330 | orchestrator | 2026-04-13 01:34:11 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:34:13.826282 | orchestrator | 2026-04-13 01:34:13 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:34:16.153572 | orchestrator | 2026-04-13 01:34:16 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:34:18.491485 | orchestrator | 2026-04-13 01:34:18 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:34:20.808409 | orchestrator | 2026-04-13 01:34:20 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:34:23.255865 | orchestrator | 2026-04-13 01:34:23 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) completed with status ACTIVE
2026-04-13 01:34:23.256717 | orchestrator | 2026-04-13 01:34:23 | INFO  | Live migrating server c223eb87-e5cc-427d-a86b-cef45ba92bbe
2026-04-13 01:34:34.481411 | orchestrator | 2026-04-13 01:34:34 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:34:36.885116 | orchestrator | 2026-04-13 01:34:36 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:34:39.278157 | orchestrator | 2026-04-13 01:34:39 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:34:41.836891 | orchestrator | 2026-04-13 01:34:41 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:34:44.183122 | orchestrator | 2026-04-13 01:34:44 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:34:46.519690 | orchestrator | 2026-04-13 01:34:46 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:34:48.801523 | orchestrator | 2026-04-13 01:34:48 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:34:51.220231 | orchestrator | 2026-04-13 01:34:51 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:34:53.550439 | orchestrator | 2026-04-13 01:34:53 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) completed with status ACTIVE
2026-04-13 01:34:53.550575 | orchestrator | 2026-04-13 01:34:53 | INFO  | Live migrating server 626b2dbe-4416-4090-ad9e-783cb7f1b0e1
2026-04-13 01:35:04.162005 | orchestrator | 2026-04-13 01:35:04 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:35:06.524323 | orchestrator | 2026-04-13 01:35:06 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:35:08.957619 | orchestrator | 2026-04-13 01:35:08 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:35:11.217034 | orchestrator | 2026-04-13 01:35:11 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:35:13.520431 | orchestrator | 2026-04-13 01:35:13 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:35:15.944083 | orchestrator | 2026-04-13 01:35:15 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:35:18.245716 | orchestrator | 2026-04-13 01:35:18 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:35:20.612723 | orchestrator | 2026-04-13 01:35:20 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:35:22.963012 | orchestrator | 2026-04-13 01:35:22 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:35:25.391445 | orchestrator | 2026-04-13 01:35:25 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:35:27.690673 | orchestrator | 2026-04-13 01:35:27 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) completed with status ACTIVE
2026-04-13 01:35:28.081332 | orchestrator | + compute_list
2026-04-13 01:35:28.081431 | orchestrator | + osism manage compute list testbed-node-3
2026-04-13 01:35:29.692587 | orchestrator | 2026-04-13 01:35:29 | ERROR  | Unable to get ansible vault password
2026-04-13 01:35:29.692709 | orchestrator | 2026-04-13 01:35:29 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:35:29.692728 | orchestrator | 2026-04-13 01:35:29 | ERROR  | Dropping encrypted entries
2026-04-13 01:35:31.046452 | orchestrator | +------+--------+----------+
2026-04-13 01:35:31.046534 | orchestrator | | ID   | Name   | Status   |
2026-04-13 01:35:31.046544 | orchestrator | |------+--------+----------|
2026-04-13 01:35:31.046552 | orchestrator | +------+--------+----------+
2026-04-13 01:35:31.414613 | orchestrator | + osism manage compute list testbed-node-4
2026-04-13 01:35:33.104363 | orchestrator | 2026-04-13 01:35:33 | ERROR  | Unable to get ansible vault password
2026-04-13 01:35:33.104520 | orchestrator | 2026-04-13 01:35:33 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:35:33.105274 | orchestrator | 2026-04-13 01:35:33 | ERROR  | Dropping encrypted entries
2026-04-13 01:35:34.862827 | orchestrator | +--------------------------------------+--------+----------+
2026-04-13 01:35:34.862921 | orchestrator | | ID                                   | Name   | Status   |
2026-04-13 01:35:34.862993 | orchestrator | |--------------------------------------+--------+----------|
2026-04-13 01:35:34.863010 | orchestrator | | de531e7b-bf01-4329-a336-47ad25575537 | test-4 | ACTIVE   |
2026-04-13 01:35:34.863026 | orchestrator | | b09b4282-fa9c-4d06-8825-54dbfb06279d | test-3 | ACTIVE   |
2026-04-13 01:35:34.863035 | orchestrator | | 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 | test-1 | ACTIVE   |
2026-04-13 01:35:34.863043 | orchestrator | | c223eb87-e5cc-427d-a86b-cef45ba92bbe | test-2 | ACTIVE   |
2026-04-13 01:35:34.863052 | orchestrator | | 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 | test   | ACTIVE   |
2026-04-13 01:35:34.863087 | orchestrator | +--------------------------------------+--------+----------+
2026-04-13 01:35:35.220755 | orchestrator | + osism manage compute list testbed-node-5
2026-04-13 01:35:36.857778 | orchestrator | 2026-04-13 01:35:36 | ERROR  | Unable to get ansible vault password
2026-04-13 01:35:36.857855 | orchestrator | 2026-04-13 01:35:36 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:35:36.857866 | orchestrator | 2026-04-13 01:35:36 | ERROR  | Dropping encrypted entries
2026-04-13 01:35:38.081537 | orchestrator | +------+--------+----------+
2026-04-13 01:35:38.081715 | orchestrator | | ID   | Name   | Status   |
2026-04-13 01:35:38.081743 | orchestrator | |------+--------+----------|
2026-04-13 01:35:38.081763 | orchestrator | +------+--------+----------+
2026-04-13 01:35:38.442474
| orchestrator | + server_ping 2026-04-13 01:35:38.443116 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-13 01:35:38.443230 | orchestrator | ++ tr -d '\r' 2026-04-13 01:35:41.844215 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-13 01:35:41.844338 | orchestrator | + ping -c3 192.168.112.104 2026-04-13 01:35:41.860356 | orchestrator | PING 192.168.112.104 (192.168.112.104) 56(84) bytes of data. 2026-04-13 01:35:41.860439 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=1 ttl=63 time=11.3 ms 2026-04-13 01:35:42.853596 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=2 ttl=63 time=2.34 ms 2026-04-13 01:35:43.854624 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=3 ttl=63 time=1.65 ms 2026-04-13 01:35:43.854714 | orchestrator | 2026-04-13 01:35:43.854726 | orchestrator | --- 192.168.112.104 ping statistics --- 2026-04-13 01:35:43.854737 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-13 01:35:43.854747 | orchestrator | rtt min/avg/max/mdev = 1.654/5.084/11.259/4.374 ms 2026-04-13 01:35:43.854757 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-13 01:35:43.854766 | orchestrator | + ping -c3 192.168.112.157 2026-04-13 01:35:43.868004 | orchestrator | PING 192.168.112.157 (192.168.112.157) 56(84) bytes of data. 
2026-04-13 01:35:43.868072 | orchestrator | 64 bytes from 192.168.112.157: icmp_seq=1 ttl=63 time=9.42 ms
2026-04-13 01:35:44.862861 | orchestrator | 64 bytes from 192.168.112.157: icmp_seq=2 ttl=63 time=2.57 ms
2026-04-13 01:35:45.863762 | orchestrator | 64 bytes from 192.168.112.157: icmp_seq=3 ttl=63 time=2.14 ms
2026-04-13 01:35:45.864008 | orchestrator |
2026-04-13 01:35:45.864043 | orchestrator | --- 192.168.112.157 ping statistics ---
2026-04-13 01:35:45.864063 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-13 01:35:45.864085 | orchestrator | rtt min/avg/max/mdev = 2.142/4.710/9.421/3.335 ms
2026-04-13 01:35:45.864231 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:35:45.864258 | orchestrator | + ping -c3 192.168.112.179
2026-04-13 01:35:45.881648 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2026-04-13 01:35:45.881736 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=11.5 ms
2026-04-13 01:35:46.874804 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.75 ms
2026-04-13 01:35:47.876668 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.21 ms
2026-04-13 01:35:47.876765 | orchestrator |
2026-04-13 01:35:47.876781 | orchestrator | --- 192.168.112.179 ping statistics ---
2026-04-13 01:35:47.876794 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-13 01:35:47.876804 | orchestrator | rtt min/avg/max/mdev = 2.213/5.491/11.510/4.261 ms
2026-04-13 01:35:47.876815 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:35:47.876826 | orchestrator | + ping -c3 192.168.112.128
2026-04-13 01:35:47.891027 | orchestrator | PING 192.168.112.128 (192.168.112.128) 56(84) bytes of data.
2026-04-13 01:35:47.891129 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=1 ttl=63 time=9.29 ms
2026-04-13 01:35:48.885936 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=2 ttl=63 time=2.58 ms
2026-04-13 01:35:49.886177 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=3 ttl=63 time=1.78 ms
2026-04-13 01:35:49.886386 | orchestrator |
2026-04-13 01:35:49.886412 | orchestrator | --- 192.168.112.128 ping statistics ---
2026-04-13 01:35:49.886424 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-13 01:35:49.886435 | orchestrator | rtt min/avg/max/mdev = 1.778/4.547/9.290/3.369 ms
2026-04-13 01:35:49.886537 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:35:49.886552 | orchestrator | + ping -c3 192.168.112.171
2026-04-13 01:35:49.903312 | orchestrator | PING 192.168.112.171 (192.168.112.171) 56(84) bytes of data.
2026-04-13 01:35:49.903496 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=1 ttl=63 time=10.1 ms
2026-04-13 01:35:50.896894 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=2 ttl=63 time=2.96 ms
2026-04-13 01:35:51.897060 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=3 ttl=63 time=1.87 ms
2026-04-13 01:35:51.897157 | orchestrator |
2026-04-13 01:35:51.897174 | orchestrator | --- 192.168.112.171 ping statistics ---
2026-04-13 01:35:51.897187 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-13 01:35:51.897199 | orchestrator | rtt min/avg/max/mdev = 1.867/4.964/10.070/3.637 ms
2026-04-13 01:35:51.897614 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2026-04-13 01:35:53.506233 | orchestrator | 2026-04-13 01:35:53 | ERROR  | Unable to get ansible vault password
2026-04-13 01:35:53.506383 | orchestrator | 2026-04-13 01:35:53 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:35:53.506405 | orchestrator | 2026-04-13 01:35:53 | ERROR  | Dropping encrypted entries
2026-04-13 01:35:55.205729 | orchestrator | 2026-04-13 01:35:55 | INFO  | Live migrating server de531e7b-bf01-4329-a336-47ad25575537
2026-04-13 01:36:05.611067 | orchestrator | 2026-04-13 01:36:05 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:36:08.013808 | orchestrator | 2026-04-13 01:36:08 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:36:10.397101 | orchestrator | 2026-04-13 01:36:10 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:36:12.767646 | orchestrator | 2026-04-13 01:36:12 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:36:15.052034 | orchestrator | 2026-04-13 01:36:15 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:36:17.392674 | orchestrator | 2026-04-13 01:36:17 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:36:19.717460 | orchestrator | 2026-04-13 01:36:19 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:36:22.014079 | orchestrator | 2026-04-13 01:36:22 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) is still in progress
2026-04-13 01:36:24.336166 | orchestrator | 2026-04-13 01:36:24 | INFO  | Live migration of de531e7b-bf01-4329-a336-47ad25575537 (test-4) completed with status ACTIVE
2026-04-13 01:36:24.336251 | orchestrator | 2026-04-13 01:36:24 | INFO  | Live migrating server b09b4282-fa9c-4d06-8825-54dbfb06279d
2026-04-13 01:36:34.088581 | orchestrator | 2026-04-13 01:36:34 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:36:36.480977 | orchestrator | 2026-04-13 01:36:36 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:36:38.880484 | orchestrator | 2026-04-13 01:36:38 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:36:41.263501 | orchestrator | 2026-04-13 01:36:41 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:36:43.612231 | orchestrator | 2026-04-13 01:36:43 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:36:45.924747 | orchestrator | 2026-04-13 01:36:45 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:36:48.223648 | orchestrator | 2026-04-13 01:36:48 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:36:50.521537 | orchestrator | 2026-04-13 01:36:50 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) is still in progress
2026-04-13 01:36:52.847615 | orchestrator | 2026-04-13 01:36:52 | INFO  | Live migration of b09b4282-fa9c-4d06-8825-54dbfb06279d (test-3) completed with status ACTIVE
2026-04-13 01:36:52.847723 | orchestrator | 2026-04-13 01:36:52 | INFO  | Live migrating server 4b5d6bc5-fac4-4668-9192-aaf58b8a3752
2026-04-13 01:37:03.591682 | orchestrator | 2026-04-13 01:37:03 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:37:06.078336 | orchestrator | 2026-04-13 01:37:06 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:37:08.496560 | orchestrator | 2026-04-13 01:37:08 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:37:10.866440 | orchestrator | 2026-04-13 01:37:10 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:37:13.232872 | orchestrator | 2026-04-13 01:37:13 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:37:15.708338 | orchestrator | 2026-04-13 01:37:15 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:37:18.034153 | orchestrator | 2026-04-13 01:37:18 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:37:20.311296 | orchestrator | 2026-04-13 01:37:20 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:37:22.650721 | orchestrator | 2026-04-13 01:37:22 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) is still in progress
2026-04-13 01:37:25.173106 | orchestrator | 2026-04-13 01:37:25 | INFO  | Live migration of 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 (test-1) completed with status ACTIVE
2026-04-13 01:37:25.173227 | orchestrator | 2026-04-13 01:37:25 | INFO  | Live migrating server c223eb87-e5cc-427d-a86b-cef45ba92bbe
2026-04-13 01:37:35.583160 | orchestrator | 2026-04-13 01:37:35 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:37:37.962110 | orchestrator | 2026-04-13 01:37:37 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:37:40.309011 | orchestrator | 2026-04-13 01:37:40 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:37:42.568938 | orchestrator | 2026-04-13 01:37:42 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:37:44.855793 | orchestrator | 2026-04-13 01:37:44 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:37:47.100645 | orchestrator | 2026-04-13 01:37:47 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:37:49.428576 | orchestrator | 2026-04-13 01:37:49 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:37:51.823795 | orchestrator | 2026-04-13 01:37:51 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) is still in progress
2026-04-13 01:37:54.277310 | orchestrator | 2026-04-13 01:37:54 | INFO  | Live migration of c223eb87-e5cc-427d-a86b-cef45ba92bbe (test-2) completed with status ACTIVE
2026-04-13 01:37:54.277442 | orchestrator | 2026-04-13 01:37:54 | INFO  | Live migrating server 626b2dbe-4416-4090-ad9e-783cb7f1b0e1
2026-04-13 01:38:05.117893 | orchestrator | 2026-04-13 01:38:05 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:38:07.524337 | orchestrator | 2026-04-13 01:38:07 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:38:10.035442 | orchestrator | 2026-04-13 01:38:10 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:38:12.424025 | orchestrator | 2026-04-13 01:38:12 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:38:14.765273 | orchestrator | 2026-04-13 01:38:14 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:38:17.161294 | orchestrator | 2026-04-13 01:38:17 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:38:19.445128 | orchestrator | 2026-04-13 01:38:19 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:38:21.763635 | orchestrator | 2026-04-13 01:38:21 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:38:24.123750 | orchestrator | 2026-04-13 01:38:24 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:38:26.424981 | orchestrator | 2026-04-13 01:38:26 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) is still in progress
2026-04-13 01:38:28.821876 | orchestrator | 2026-04-13 01:38:28 | INFO  | Live migration of 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 (test) completed with status ACTIVE
2026-04-13 01:38:29.155853 | orchestrator | + compute_list
2026-04-13 01:38:29.155961 | orchestrator | + osism manage compute list testbed-node-3
2026-04-13 01:38:30.764277 | orchestrator | 2026-04-13 01:38:30 | ERROR  | Unable to get ansible vault password
2026-04-13 01:38:30.764379 | orchestrator | 2026-04-13 01:38:30 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:38:30.764397 | orchestrator | 2026-04-13 01:38:30 | ERROR  | Dropping encrypted entries
2026-04-13 01:38:31.999789 | orchestrator | +------+--------+----------+
2026-04-13 01:38:31.999902 | orchestrator | | ID | Name | Status |
2026-04-13 01:38:31.999919 | orchestrator | |------+--------+----------|
2026-04-13 01:38:31.999931 | orchestrator | +------+--------+----------+
2026-04-13 01:38:32.351848 | orchestrator | + osism manage compute list testbed-node-4
2026-04-13 01:38:33.977429 | orchestrator | 2026-04-13 01:38:33 | ERROR  | Unable to get ansible vault password
2026-04-13 01:38:33.977579 | orchestrator | 2026-04-13 01:38:33 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:38:33.977607 | orchestrator | 2026-04-13 01:38:33 | ERROR  | Dropping encrypted entries
2026-04-13 01:38:35.194872 | orchestrator | +------+--------+----------+
2026-04-13 01:38:35.194985 | orchestrator | | ID | Name | Status |
2026-04-13 01:38:35.195029 | orchestrator | |------+--------+----------|
2026-04-13 01:38:35.195042 | orchestrator | +------+--------+----------+
2026-04-13 01:38:35.544264 | orchestrator | + osism manage compute list testbed-node-5
2026-04-13 01:38:37.123117 | orchestrator | 2026-04-13 01:38:37 | ERROR  | Unable to get ansible vault password
2026-04-13 01:38:37.123232 | orchestrator | 2026-04-13 01:38:37 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-13 01:38:37.123260 | orchestrator | 2026-04-13 01:38:37 | ERROR  | Dropping encrypted entries
2026-04-13 01:38:38.824988 | orchestrator | +--------------------------------------+--------+----------+
2026-04-13 01:38:38.825139 | orchestrator | | ID | Name | Status |
2026-04-13 01:38:38.825159 | orchestrator | |--------------------------------------+--------+----------|
2026-04-13 01:38:38.825170 | orchestrator | | de531e7b-bf01-4329-a336-47ad25575537 | test-4 | ACTIVE |
2026-04-13 01:38:38.825181 | orchestrator | | b09b4282-fa9c-4d06-8825-54dbfb06279d | test-3 | ACTIVE |
2026-04-13 01:38:38.825192 | orchestrator | | 4b5d6bc5-fac4-4668-9192-aaf58b8a3752 | test-1 | ACTIVE |
2026-04-13 01:38:38.825202 | orchestrator | | c223eb87-e5cc-427d-a86b-cef45ba92bbe | test-2 | ACTIVE |
2026-04-13 01:38:38.825213 | orchestrator | | 626b2dbe-4416-4090-ad9e-783cb7f1b0e1 | test | ACTIVE |
2026-04-13 01:38:38.825225 | orchestrator | +--------------------------------------+--------+----------+
2026-04-13 01:38:39.153904 | orchestrator | + server_ping
2026-04-13 01:38:39.155119 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-13 01:38:39.155173 | orchestrator | ++ tr -d '\r'
2026-04-13 01:38:42.014284 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:38:42.014379 | orchestrator | + ping -c3 192.168.112.104
2026-04-13 01:38:42.023282 | orchestrator | PING 192.168.112.104 (192.168.112.104) 56(84) bytes of data.
2026-04-13 01:38:42.023377 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=1 ttl=63 time=5.92 ms
2026-04-13 01:38:43.020409 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=2 ttl=63 time=2.33 ms
2026-04-13 01:38:44.020744 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=3 ttl=63 time=1.82 ms
2026-04-13 01:38:44.020846 | orchestrator |
2026-04-13 01:38:44.020862 | orchestrator | --- 192.168.112.104 ping statistics ---
2026-04-13 01:38:44.020876 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
2026-04-13 01:38:44.020909 | orchestrator | rtt min/avg/max/mdev = 1.817/3.358/5.924/1.826 ms
2026-04-13 01:38:44.021365 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:38:44.021400 | orchestrator | + ping -c3 192.168.112.157
2026-04-13 01:38:44.031771 | orchestrator | PING 192.168.112.157 (192.168.112.157) 56(84) bytes of data.
2026-04-13 01:38:44.031853 | orchestrator | 64 bytes from 192.168.112.157: icmp_seq=1 ttl=63 time=7.51 ms
2026-04-13 01:38:45.028412 | orchestrator | 64 bytes from 192.168.112.157: icmp_seq=2 ttl=63 time=3.08 ms
2026-04-13 01:38:46.028509 | orchestrator | 64 bytes from 192.168.112.157: icmp_seq=3 ttl=63 time=1.73 ms
2026-04-13 01:38:46.028640 | orchestrator |
2026-04-13 01:38:46.028669 | orchestrator | --- 192.168.112.157 ping statistics ---
2026-04-13 01:38:46.028691 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-13 01:38:46.028711 | orchestrator | rtt min/avg/max/mdev = 1.733/4.106/7.509/2.467 ms
2026-04-13 01:38:46.028782 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:38:46.028803 | orchestrator | + ping -c3 192.168.112.179
2026-04-13 01:38:46.041160 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2026-04-13 01:38:46.041237 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=7.94 ms
2026-04-13 01:38:47.036924 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.18 ms
2026-04-13 01:38:48.038929 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.79 ms
2026-04-13 01:38:48.039029 | orchestrator |
2026-04-13 01:38:48.039046 | orchestrator | --- 192.168.112.179 ping statistics ---
2026-04-13 01:38:48.039060 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-13 01:38:48.039148 | orchestrator | rtt min/avg/max/mdev = 1.790/3.969/7.940/2.811 ms
2026-04-13 01:38:48.039201 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:38:48.039215 | orchestrator | + ping -c3 192.168.112.128
2026-04-13 01:38:48.050151 | orchestrator | PING 192.168.112.128 (192.168.112.128) 56(84) bytes of data.
2026-04-13 01:38:48.050230 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=1 ttl=63 time=6.49 ms
2026-04-13 01:38:49.047556 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=2 ttl=63 time=2.04 ms
2026-04-13 01:38:50.048422 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=3 ttl=63 time=1.85 ms
2026-04-13 01:38:50.048526 | orchestrator |
2026-04-13 01:38:50.048539 | orchestrator | --- 192.168.112.128 ping statistics ---
2026-04-13 01:38:50.048548 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-13 01:38:50.048555 | orchestrator | rtt min/avg/max/mdev = 1.848/3.457/6.487/2.143 ms
2026-04-13 01:38:50.049008 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-13 01:38:50.049027 | orchestrator | + ping -c3 192.168.112.171
2026-04-13 01:38:50.058467 | orchestrator | PING 192.168.112.171 (192.168.112.171) 56(84) bytes of data.
2026-04-13 01:38:50.058566 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=1 ttl=63 time=6.09 ms
2026-04-13 01:38:51.055759 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=2 ttl=63 time=2.14 ms
2026-04-13 01:38:52.056675 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=3 ttl=63 time=1.85 ms
2026-04-13 01:38:52.056780 | orchestrator |
2026-04-13 01:38:52.056796 | orchestrator | --- 192.168.112.171 ping statistics ---
2026-04-13 01:38:52.056810 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-13 01:38:52.056822 | orchestrator | rtt min/avg/max/mdev = 1.846/3.359/6.089/1.933 ms
2026-04-13 01:38:52.169770 | orchestrator | ok: Runtime: 0:18:36.001377
2026-04-13 01:38:52.216551 |
2026-04-13 01:38:52.216693 | TASK [Run tempest]
2026-04-13 01:38:52.935995 | orchestrator | + set -e
2026-04-13 01:38:52.936204 | orchestrator | + source /opt/manager-vars.sh
2026-04-13 01:38:52.936228 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-13 01:38:52.936239 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-13 01:38:52.936250 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-13 01:38:52.936261 | orchestrator | ++ CEPH_VERSION=reef
2026-04-13 01:38:52.936272 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-13 01:38:52.936309 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-13 01:38:52.936327 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-13 01:38:52.936345 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-13 01:38:52.936354 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-13 01:38:52.936369 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-13 01:38:52.936378 | orchestrator | ++ export ARA=false
2026-04-13 01:38:52.936387 | orchestrator | ++ ARA=false
2026-04-13 01:38:52.936399 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-13 01:38:52.936408 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-13 01:38:52.936416 | orchestrator | ++ export TEMPEST=true
2026-04-13 01:38:52.936429 | orchestrator | ++ TEMPEST=true
2026-04-13 01:38:52.936438 | orchestrator | ++ export IS_ZUUL=true
2026-04-13 01:38:52.936446 | orchestrator | ++ IS_ZUUL=true
2026-04-13 01:38:52.936456 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231
2026-04-13 01:38:52.936466 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231
2026-04-13 01:38:52.936474 | orchestrator | ++ export EXTERNAL_API=false
2026-04-13 01:38:52.936483 | orchestrator | ++ EXTERNAL_API=false
2026-04-13 01:38:52.937246 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-13 01:38:52.937265 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-13 01:38:52.937275 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-13 01:38:52.937286 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-13 01:38:52.937296 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-13 01:38:52.937306 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-13 01:38:52.937456 | orchestrator | + echo
2026-04-13 01:38:52.937745 | orchestrator |
2026-04-13 01:38:52.937767 | orchestrator | # Tempest
2026-04-13 01:38:52.937777 | orchestrator |
2026-04-13 01:38:52.937786 | orchestrator | + echo '# Tempest'
2026-04-13 01:38:52.937796 | orchestrator | + echo
2026-04-13 01:38:52.937805 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-04-13 01:38:52.937814 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-04-13 01:39:04.302867 | orchestrator | 2026-04-13 01:39:04 | INFO  | Prepare task for execution of tempest.
2026-04-13 01:39:04.390291 | orchestrator | 2026-04-13 01:39:04 | INFO  | Task aeab6fd5-b105-4c52-8ec7-3283189a2102 (tempest) was prepared for execution.
2026-04-13 01:39:04.390542 | orchestrator | 2026-04-13 01:39:04 | INFO  | It takes a moment until task aeab6fd5-b105-4c52-8ec7-3283189a2102 (tempest) has been started and output is visible here.
2026-04-13 01:40:26.266357 | orchestrator |
2026-04-13 01:40:26.266465 | orchestrator | PLAY [Run tempest] *************************************************************
2026-04-13 01:40:26.266481 | orchestrator |
2026-04-13 01:40:26.266492 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-04-13 01:40:26.266513 | orchestrator | Monday 13 April 2026 01:39:07 +0000 (0:00:00.345) 0:00:00.346 **********
2026-04-13 01:40:26.266523 | orchestrator | changed: [testbed-manager]
2026-04-13 01:40:26.266533 | orchestrator |
2026-04-13 01:40:26.266544 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-04-13 01:40:26.266553 | orchestrator | Monday 13 April 2026 01:39:09 +0000 (0:00:01.122) 0:00:01.468 **********
2026-04-13 01:40:26.266564 | orchestrator | changed: [testbed-manager]
2026-04-13 01:40:26.266579 | orchestrator |
2026-04-13 01:40:26.266603 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-04-13 01:40:26.266622 | orchestrator | Monday 13 April 2026 01:39:10 +0000 (0:00:01.242) 0:00:02.710 **********
2026-04-13 01:40:26.266638 | orchestrator | ok: [testbed-manager]
2026-04-13 01:40:26.266656 | orchestrator |
2026-04-13 01:40:26.266672 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-04-13 01:40:26.266690 | orchestrator | Monday 13 April 2026 01:39:10 +0000 (0:00:00.467) 0:00:03.177 **********
2026-04-13 01:40:26.266707 | orchestrator | changed: [testbed-manager]
2026-04-13 01:40:26.266725 | orchestrator |
2026-04-13 01:40:26.266735 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-04-13 01:40:26.266745 | orchestrator | Monday 13 April 2026 01:39:34 +0000 (0:00:23.660) 0:00:26.838 **********
2026-04-13 01:40:26.266784 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-04-13 01:40:26.266795 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-04-13 01:40:26.266808 | orchestrator |
2026-04-13 01:40:26.266819 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-04-13 01:40:26.266828 | orchestrator | Monday 13 April 2026 01:39:42 +0000 (0:00:08.410) 0:00:35.248 **********
2026-04-13 01:40:26.266838 | orchestrator | ok: [testbed-manager] => {
2026-04-13 01:40:26.266847 | orchestrator |  "changed": false,
2026-04-13 01:40:26.266857 | orchestrator |  "msg": "All assertions passed"
2026-04-13 01:40:26.266867 | orchestrator | }
2026-04-13 01:40:26.266877 | orchestrator |
2026-04-13 01:40:26.266886 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-04-13 01:40:26.266896 | orchestrator | Monday 13 April 2026 01:39:42 +0000 (0:00:00.167) 0:00:35.415 **********
2026-04-13 01:40:26.266905 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-13 01:40:26.266915 | orchestrator |
2026-04-13 01:40:26.266924 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-04-13 01:40:26.266934 | orchestrator | Monday 13 April 2026 01:39:46 +0000 (0:00:03.825) 0:00:39.241 **********
2026-04-13 01:40:26.266943 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-13 01:40:26.266953 | orchestrator |
2026-04-13 01:40:26.266962 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-04-13 01:40:26.266972 | orchestrator | Monday 13 April 2026 01:39:48 +0000 (0:00:02.075) 0:00:41.316 **********
2026-04-13 01:40:26.266981 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-13 01:40:26.266990 | orchestrator |
2026-04-13 01:40:26.267000 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-04-13 01:40:26.267010 | orchestrator | Monday 13 April 2026 01:39:52 +0000 (0:00:04.073) 0:00:45.390 **********
2026-04-13 01:40:26.267019 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-13 01:40:26.267028 | orchestrator |
2026-04-13 01:40:26.267038 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-04-13 01:40:26.267047 | orchestrator | Monday 13 April 2026 01:39:53 +0000 (0:00:00.194) 0:00:45.585 **********
2026-04-13 01:40:26.267056 | orchestrator | changed: [testbed-manager]
2026-04-13 01:40:26.267066 | orchestrator |
2026-04-13 01:40:26.267076 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-04-13 01:40:26.267086 | orchestrator | Monday 13 April 2026 01:39:55 +0000 (0:00:02.700) 0:00:48.285 **********
2026-04-13 01:40:26.267095 | orchestrator | changed: [testbed-manager]
2026-04-13 01:40:26.267105 | orchestrator |
2026-04-13 01:40:26.267114 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-04-13 01:40:26.267123 | orchestrator | Monday 13 April 2026 01:40:05 +0000 (0:00:09.964) 0:00:58.250 **********
2026-04-13 01:40:26.267176 | orchestrator | changed: [testbed-manager]
2026-04-13 01:40:26.267193 | orchestrator |
2026-04-13 01:40:26.267209 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-04-13 01:40:26.267227 | orchestrator | Monday 13 April 2026 01:40:06 +0000 (0:00:00.732) 0:00:58.982 **********
2026-04-13 01:40:26.267244 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-13 01:40:26.267260 | orchestrator |
2026-04-13 01:40:26.267276 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-04-13 01:40:26.267286 | orchestrator | Monday 13 April 2026 01:40:08 +0000 (0:00:01.559) 0:01:00.541 **********
2026-04-13 01:40:26.267295 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-13 01:40:26.267304 | orchestrator |
2026-04-13 01:40:26.267314 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-04-13 01:40:26.267323 | orchestrator | Monday 13 April 2026 01:40:09 +0000 (0:00:01.431) 0:01:01.973 **********
2026-04-13 01:40:26.267333 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-13 01:40:26.267342 | orchestrator |
2026-04-13 01:40:26.267352 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-04-13 01:40:26.267370 | orchestrator | Monday 13 April 2026 01:40:09 +0000 (0:00:00.173) 0:01:02.146 **********
2026-04-13 01:40:26.267380 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-13 01:40:26.267389 | orchestrator |
2026-04-13 01:40:26.267406 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-04-13 01:40:26.267416 | orchestrator | Monday 13 April 2026 01:40:10 +0000 (0:00:00.322) 0:01:02.469 **********
2026-04-13 01:40:26.267426 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-13 01:40:26.267435 | orchestrator |
2026-04-13 01:40:26.267445 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-04-13 01:40:26.267484 | orchestrator | Monday 13 April 2026 01:40:14 +0000 (0:00:04.088) 0:01:06.558 **********
2026-04-13 01:40:26.267495 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-04-13 01:40:26.267505 | orchestrator |  "changed": false,
2026-04-13 01:40:26.267514 | orchestrator |  "msg": "All assertions passed"
2026-04-13 01:40:26.267524 | orchestrator | }
2026-04-13 01:40:26.267534 | orchestrator |
2026-04-13 01:40:26.267544 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-04-13 01:40:26.267554 | orchestrator | Monday 13 April 2026 01:40:14 +0000 (0:00:00.197) 0:01:06.755 **********
2026-04-13 01:40:26.267564 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-04-13 01:40:26.267574 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-04-13 01:40:26.267584 | orchestrator | skipping: [testbed-manager]
2026-04-13 01:40:26.267593 | orchestrator |
2026-04-13 01:40:26.267603 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-04-13 01:40:26.267612 | orchestrator | Monday 13 April 2026 01:40:14 +0000 (0:00:00.201) 0:01:06.956 **********
2026-04-13 01:40:26.267622 | orchestrator | skipping: [testbed-manager]
2026-04-13 01:40:26.267631 | orchestrator |
2026-04-13 01:40:26.267641 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-04-13 01:40:26.267650 | orchestrator | Monday 13 April 2026 01:40:14 +0000 (0:00:00.164) 0:01:07.121 **********
2026-04-13 01:40:26.267660 | orchestrator | ok: [testbed-manager]
2026-04-13 01:40:26.267669 | orchestrator |
2026-04-13 01:40:26.267679 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-04-13 01:40:26.267689 | orchestrator | Monday 13 April 2026
01:40:15 +0000 (0:00:00.537) 0:01:07.658 ********** 2026-04-13 01:40:26.267699 | orchestrator | changed: [testbed-manager] 2026-04-13 01:40:26.267708 | orchestrator | 2026-04-13 01:40:26.267718 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] ******************* 2026-04-13 01:40:26.267727 | orchestrator | Monday 13 April 2026 01:40:16 +0000 (0:00:00.899) 0:01:08.558 ********** 2026-04-13 01:40:26.267737 | orchestrator | ok: [testbed-manager] 2026-04-13 01:40:26.267746 | orchestrator | 2026-04-13 01:40:26.267756 | orchestrator | TASK [osism.validations.tempest : Copy include list] *************************** 2026-04-13 01:40:26.267765 | orchestrator | Monday 13 April 2026 01:40:16 +0000 (0:00:00.467) 0:01:09.025 ********** 2026-04-13 01:40:26.267774 | orchestrator | skipping: [testbed-manager] 2026-04-13 01:40:26.267784 | orchestrator | 2026-04-13 01:40:26.267793 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] ********************** 2026-04-13 01:40:26.267803 | orchestrator | Monday 13 April 2026 01:40:16 +0000 (0:00:00.337) 0:01:09.362 ********** 2026-04-13 01:40:26.267813 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1}) 2026-04-13 01:40:26.267822 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2}) 2026-04-13 01:40:26.267832 | orchestrator | 2026-04-13 01:40:26.267842 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] ********************** 2026-04-13 01:40:26.267851 | orchestrator | Monday 13 April 2026 01:40:25 +0000 (0:00:08.226) 0:01:17.589 ********** 2026-04-13 01:40:26.267861 | orchestrator | changed: [testbed-manager] 2026-04-13 01:40:26.267877 | orchestrator | 2026-04-13 01:40:26.267887 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:40:26.267897 | orchestrator | 
testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-13 01:40:26.267908 | orchestrator |
2026-04-13 01:40:26.267917 | orchestrator |
2026-04-13 01:40:26.267927 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 01:40:26.267936 | orchestrator | Monday 13 April 2026 01:40:26 +0000 (0:00:01.068) 0:01:18.657 **********
2026-04-13 01:40:26.267946 | orchestrator | ===============================================================================
2026-04-13 01:40:26.267955 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 23.66s
2026-04-13 01:40:26.267965 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 9.96s
2026-04-13 01:40:26.267974 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 8.41s
2026-04-13 01:40:26.267984 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 8.23s
2026-04-13 01:40:26.267998 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 4.09s
2026-04-13 01:40:26.268008 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 4.07s
2026-04-13 01:40:26.268018 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.83s
2026-04-13 01:40:26.268027 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.70s
2026-04-13 01:40:26.268037 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 2.08s
2026-04-13 01:40:26.268047 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.56s
2026-04-13 01:40:26.268056 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.43s
2026-04-13 01:40:26.268066 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.24s
2026-04-13 01:40:26.268075 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.12s
2026-04-13 01:40:26.268085 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.07s
2026-04-13 01:40:26.268094 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.90s
2026-04-13 01:40:26.268104 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.73s
2026-04-13 01:40:26.268114 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.54s
2026-04-13 01:40:26.268148 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.47s
2026-04-13 01:40:26.543114 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.47s
2026-04-13 01:40:26.543275 | orchestrator | osism.validations.tempest : Copy include list --------------------------- 0.34s
2026-04-13 01:40:26.764247 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-04-13 01:40:26.769305 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-04-13 01:40:26.773577 | orchestrator |
2026-04-13 01:40:26.773638 | orchestrator | ## IDENTITY (API)
2026-04-13 01:40:26.773653 | orchestrator |
2026-04-13 01:40:26.773664 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-13 01:40:26.773676 | orchestrator | + echo
2026-04-13 01:40:26.773688 | orchestrator | + echo '## IDENTITY (API)'
2026-04-13 01:40:26.773698 | orchestrator | + echo
2026-04-13 01:40:26.773710 | orchestrator | + _tempest tempest.api.identity.v3
2026-04-13 01:40:26.773721 | orchestrator | + local regex=tempest.api.identity.v3
2026-04-13 01:40:26.775163 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-04-13 01:40:26.776223 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-13 01:40:26.779382 | orchestrator | + tee -a /opt/tempest/20260413-0140.log
2026-04-13 01:40:29.064083 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-13 01:40:29.064290 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-13 01:40:29.064310 | orchestrator | we strongly recommend against using it for new projects.
2026-04-13 01:40:29.064323 | orchestrator |
2026-04-13 01:40:29.064334 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-13 01:40:29.064345 | orchestrator | framework. For more detail see
2026-04-13 01:40:29.064361 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-13 01:40:29.064388 | orchestrator |
2026-04-13 01:40:29.064411 | orchestrator | __import__(import_str)
2026-04-13 01:40:30.602645 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-13 01:40:30.602753 | orchestrator | Did you mean one of these?
2026-04-13 01:40:30.602772 | orchestrator | help
2026-04-13 01:40:30.602785 | orchestrator | init
2026-04-13 01:40:31.016843 | orchestrator |
2026-04-13 01:40:31.016970 | orchestrator | ## IMAGE (API)
2026-04-13 01:40:31.016994 | orchestrator |
2026-04-13 01:40:31.017013 | orchestrator | + echo
2026-04-13 01:40:31.017031 | orchestrator | + echo '## IMAGE (API)'
2026-04-13 01:40:31.017052 | orchestrator | + echo
2026-04-13 01:40:31.017070 | orchestrator | + _tempest tempest.api.image.v2
2026-04-13 01:40:31.017087 | orchestrator | + local regex=tempest.api.image.v2
2026-04-13 01:40:31.018763 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-04-13 01:40:31.074283 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-13 01:40:31.097963 | orchestrator | + tee -a /opt/tempest/20260413-0140.log
2026-04-13 01:40:33.193844 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-13 01:40:33.193988 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-13 01:40:33.194853 | orchestrator | we strongly recommend against using it for new projects.
2026-04-13 01:40:33.194884 | orchestrator |
2026-04-13 01:40:33.194897 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-13 01:40:33.194909 | orchestrator | framework. For more detail see
2026-04-13 01:40:33.194922 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-13 01:40:33.194942 | orchestrator |
2026-04-13 01:40:33.194962 | orchestrator | __import__(import_str)
2026-04-13 01:40:34.769585 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-13 01:40:34.769668 | orchestrator | Did you mean one of these?
2026-04-13 01:40:34.769681 | orchestrator | help
2026-04-13 01:40:34.769689 | orchestrator | init
2026-04-13 01:40:35.183825 | orchestrator |
2026-04-13 01:40:35.183905 | orchestrator | ## NETWORK (API)
2026-04-13 01:40:35.183917 | orchestrator |
2026-04-13 01:40:35.183925 | orchestrator | + echo
2026-04-13 01:40:35.183945 | orchestrator | + echo '## NETWORK (API)'
2026-04-13 01:40:35.183961 | orchestrator | + echo
2026-04-13 01:40:35.183968 | orchestrator | + _tempest tempest.api.network
2026-04-13 01:40:35.183976 | orchestrator | + local regex=tempest.api.network
2026-04-13 01:40:35.184273 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-04-13 01:40:35.185555 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-13 01:40:35.188102 | orchestrator | + tee -a /opt/tempest/20260413-0140.log
2026-04-13 01:40:37.221573 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-13 01:40:37.221683 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-13 01:40:37.221704 | orchestrator | we strongly recommend against using it for new projects.
2026-04-13 01:40:37.221723 | orchestrator |
2026-04-13 01:40:37.221738 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-13 01:40:37.221786 | orchestrator | framework. For more detail see
2026-04-13 01:40:37.221803 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-13 01:40:37.221818 | orchestrator |
2026-04-13 01:40:37.221833 | orchestrator | __import__(import_str)
2026-04-13 01:40:38.795459 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-13 01:40:38.930686 | orchestrator | Did you mean one of these?
2026-04-13 01:40:38.930820 | orchestrator | help
2026-04-13 01:40:38.930846 | orchestrator | init
2026-04-13 01:40:39.217662 | orchestrator |
2026-04-13 01:40:39.217786 | orchestrator | ## VOLUME (API)
2026-04-13 01:40:39.217815 | orchestrator |
2026-04-13 01:40:39.217835 | orchestrator | + echo
2026-04-13 01:40:39.217854 | orchestrator | + echo '## VOLUME (API)'
2026-04-13 01:40:39.217874 | orchestrator | + echo
2026-04-13 01:40:39.217890 | orchestrator | + _tempest tempest.api.volume
2026-04-13 01:40:39.217908 | orchestrator | + local regex=tempest.api.volume
2026-04-13 01:40:39.217943 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-04-13 01:40:39.220675 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-13 01:40:39.220760 | orchestrator | + tee -a /opt/tempest/20260413-0140.log
2026-04-13 01:40:41.384088 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-13 01:40:41.384227 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-13 01:40:41.384242 | orchestrator | we strongly recommend against using it for new projects.
2026-04-13 01:40:41.384255 | orchestrator |
2026-04-13 01:40:41.384271 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-13 01:40:41.384285 | orchestrator | framework. For more detail see
2026-04-13 01:40:41.384302 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-13 01:40:41.384317 | orchestrator |
2026-04-13 01:40:41.384331 | orchestrator | __import__(import_str)
2026-04-13 01:40:42.986505 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-13 01:40:42.986665 | orchestrator | Did you mean one of these?
2026-04-13 01:40:42.986683 | orchestrator | help
2026-04-13 01:40:42.986695 | orchestrator | init
2026-04-13 01:40:43.395533 | orchestrator |
2026-04-13 01:40:43.395692 | orchestrator | ## COMPUTE (API)
2026-04-13 01:40:43.395713 | orchestrator |
2026-04-13 01:40:43.395764 | orchestrator | + echo
2026-04-13 01:40:43.395780 | orchestrator | + echo '## COMPUTE (API)'
2026-04-13 01:40:43.395795 | orchestrator | + echo
2026-04-13 01:40:43.395804 | orchestrator | + _tempest tempest.api.compute
2026-04-13 01:40:43.395813 | orchestrator | + local regex=tempest.api.compute
2026-04-13 01:40:43.396762 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-04-13 01:40:43.398644 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-13 01:40:43.401992 | orchestrator | + tee -a /opt/tempest/20260413-0140.log
2026-04-13 01:40:45.638434 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-13 01:40:45.638542 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-13 01:40:45.638558 | orchestrator | we strongly recommend against using it for new projects.
2026-04-13 01:40:45.638571 | orchestrator |
2026-04-13 01:40:45.638583 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-13 01:40:45.638595 | orchestrator | framework. For more detail see
2026-04-13 01:40:45.638606 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-13 01:40:45.638617 | orchestrator |
2026-04-13 01:40:45.638628 | orchestrator | __import__(import_str)
2026-04-13 01:40:47.177377 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-13 01:40:47.177503 | orchestrator | Did you mean one of these?
2026-04-13 01:40:47.177516 | orchestrator | help
2026-04-13 01:40:47.177527 | orchestrator | init
2026-04-13 01:40:47.639455 | orchestrator | + echo
2026-04-13 01:40:47.640620 | orchestrator |
2026-04-13 01:40:47.640656 | orchestrator | ## DNS (API)
2026-04-13 01:40:47.640669 | orchestrator | + echo '## DNS (API)'
2026-04-13 01:40:47.640682 | orchestrator |
2026-04-13 01:40:47.640693 | orchestrator | + echo
2026-04-13 01:40:47.640704 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-04-13 01:40:47.640716 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-04-13 01:40:47.641897 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-04-13 01:40:47.644316 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-13 01:40:47.648853 | orchestrator | + tee -a /opt/tempest/20260413-0140.log
2026-04-13 01:40:49.787446 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-13 01:40:49.787542 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-13 01:40:49.787556 | orchestrator | we strongly recommend against using it for new projects.
2026-04-13 01:40:49.787568 | orchestrator |
2026-04-13 01:40:49.787579 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-13 01:40:49.787589 | orchestrator | framework. For more detail see
2026-04-13 01:40:49.787600 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-13 01:40:49.787609 | orchestrator |
2026-04-13 01:40:49.787619 | orchestrator | __import__(import_str)
2026-04-13 01:40:51.386224 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-13 01:40:51.387310 | orchestrator | Did you mean one of these?
2026-04-13 01:40:51.387402 | orchestrator | help
2026-04-13 01:40:51.387427 | orchestrator | init
2026-04-13 01:40:51.796643 | orchestrator |
2026-04-13 01:40:51.796738 | orchestrator | ## OBJECT-STORE (API)
2026-04-13 01:40:51.796755 | orchestrator |
2026-04-13 01:40:51.796766 | orchestrator | + echo
2026-04-13 01:40:51.796775 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-04-13 01:40:51.796785 | orchestrator | + echo
2026-04-13 01:40:51.796795 | orchestrator | + _tempest tempest.api.object_storage
2026-04-13 01:40:51.796807 | orchestrator | + local regex=tempest.api.object_storage
2026-04-13 01:40:51.797466 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-04-13 01:40:51.797808 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-13 01:40:51.800783 | orchestrator | + tee -a /opt/tempest/20260413-0140.log
2026-04-13 01:40:53.807906 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-13 01:40:53.808024 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-13 01:40:53.808048 | orchestrator | we strongly recommend against using it for new projects.
2026-04-13 01:40:53.808064 | orchestrator |
2026-04-13 01:40:53.808074 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-13 01:40:53.808085 | orchestrator | framework. For more detail see
2026-04-13 01:40:53.808095 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-13 01:40:53.808105 | orchestrator |
2026-04-13 01:40:53.808115 | orchestrator | __import__(import_str)
2026-04-13 01:40:55.350516 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-13 01:40:55.350621 | orchestrator | Did you mean one of these?
2026-04-13 01:40:55.350669 | orchestrator | help 2026-04-13 01:40:55.350682 | orchestrator | init 2026-04-13 01:40:55.947181 | orchestrator | ok: Runtime: 0:02:03.243127 2026-04-13 01:40:55.962667 | 2026-04-13 01:40:55.962870 | TASK [Check prometheus alert status] 2026-04-13 01:40:56.496886 | orchestrator | skipping: Conditional result was False 2026-04-13 01:40:56.498632 | 2026-04-13 01:40:56.498729 | PLAY RECAP 2026-04-13 01:40:56.498810 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0 2026-04-13 01:40:56.498860 | 2026-04-13 01:40:56.709710 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-04-13 01:40:56.712590 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-04-13 01:40:57.497510 | 2026-04-13 01:40:57.497673 | PLAY [Post output play] 2026-04-13 01:40:57.514380 | 2026-04-13 01:40:57.514536 | LOOP [stage-output : Register sources] 2026-04-13 01:40:57.586273 | 2026-04-13 01:40:57.586644 | TASK [stage-output : Check sudo] 2026-04-13 01:40:58.429215 | orchestrator | sudo: a password is required 2026-04-13 01:40:58.625807 | orchestrator | ok: Runtime: 0:00:00.019455 2026-04-13 01:40:58.640544 | 2026-04-13 01:40:58.640698 | LOOP [stage-output : Set source and destination for files and folders] 2026-04-13 01:40:58.680106 | 2026-04-13 01:40:58.680367 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-04-13 01:40:58.748461 | orchestrator | ok 2026-04-13 01:40:58.757339 | 2026-04-13 01:40:58.757490 | LOOP [stage-output : Ensure target folders exist] 2026-04-13 01:40:59.215357 | orchestrator | ok: "docs" 2026-04-13 01:40:59.215696 | 2026-04-13 01:40:59.476714 | orchestrator | ok: "artifacts" 2026-04-13 01:40:59.750197 | orchestrator | ok: "logs" 2026-04-13 01:40:59.775383 | 2026-04-13 01:40:59.775591 | LOOP [stage-output : Copy files and folders to staging folder] 2026-04-13 01:40:59.816325 | 2026-04-13 01:40:59.816635 | TASK 
[stage-output : Make all log files readable] 2026-04-13 01:41:00.122220 | orchestrator | ok 2026-04-13 01:41:00.132499 | 2026-04-13 01:41:00.132632 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-04-13 01:41:00.167289 | orchestrator | skipping: Conditional result was False 2026-04-13 01:41:00.183914 | 2026-04-13 01:41:00.184079 | TASK [stage-output : Discover log files for compression] 2026-04-13 01:41:00.199540 | orchestrator | skipping: Conditional result was False 2026-04-13 01:41:00.213956 | 2026-04-13 01:41:00.214096 | LOOP [stage-output : Archive everything from logs] 2026-04-13 01:41:00.257237 | 2026-04-13 01:41:00.257479 | PLAY [Post cleanup play] 2026-04-13 01:41:00.265866 | 2026-04-13 01:41:00.265977 | TASK [Set cloud fact (Zuul deployment)] 2026-04-13 01:41:00.335462 | orchestrator | ok 2026-04-13 01:41:00.347974 | 2026-04-13 01:41:00.348108 | TASK [Set cloud fact (local deployment)] 2026-04-13 01:41:00.383373 | orchestrator | skipping: Conditional result was False 2026-04-13 01:41:00.400450 | 2026-04-13 01:41:00.400624 | TASK [Clean the cloud environment] 2026-04-13 01:41:01.008924 | orchestrator | 2026-04-13 01:41:01 - clean up servers 2026-04-13 01:41:01.828664 | orchestrator | 2026-04-13 01:41:01 - testbed-manager 2026-04-13 01:41:01.957192 | orchestrator | 2026-04-13 01:41:01 - testbed-node-1 2026-04-13 01:41:02.078432 | orchestrator | 2026-04-13 01:41:02 - testbed-node-4 2026-04-13 01:41:02.205219 | orchestrator | 2026-04-13 01:41:02 - testbed-node-0 2026-04-13 01:41:02.348746 | orchestrator | 2026-04-13 01:41:02 - testbed-node-2 2026-04-13 01:41:02.477013 | orchestrator | 2026-04-13 01:41:02 - testbed-node-5 2026-04-13 01:41:02.610558 | orchestrator | 2026-04-13 01:41:02 - testbed-node-3 2026-04-13 01:41:02.729670 | orchestrator | 2026-04-13 01:41:02 - clean up keypairs 2026-04-13 01:41:02.750404 | orchestrator | 2026-04-13 01:41:02 - testbed 2026-04-13 01:41:02.787625 | orchestrator | 2026-04-13 01:41:02 - wait for 
servers to be gone 2026-04-13 01:41:11.594201 | orchestrator | 2026-04-13 01:41:11 - clean up ports 2026-04-13 01:41:11.780595 | orchestrator | 2026-04-13 01:41:11 - 070504fb-1f71-41f7-a745-d63b34ce79a9 2026-04-13 01:41:12.077095 | orchestrator | 2026-04-13 01:41:12 - 3a4daab2-8f45-4e44-957d-f564b7058cdf 2026-04-13 01:41:12.336791 | orchestrator | 2026-04-13 01:41:12 - 3e5f264c-ab3e-43e9-82b8-3e26faeb93f3 2026-04-13 01:41:12.570532 | orchestrator | 2026-04-13 01:41:12 - 7b550290-8b16-48d0-8e5f-36c002d3b12e 2026-04-13 01:41:12.793003 | orchestrator | 2026-04-13 01:41:12 - 9b7e3c61-7a2c-4a21-a35d-ccb5a7e52e8d 2026-04-13 01:41:13.011737 | orchestrator | 2026-04-13 01:41:13 - a82a4cc7-9b4e-4ff9-b917-35a656e01585 2026-04-13 01:41:13.228791 | orchestrator | 2026-04-13 01:41:13 - c4e1fd38-8862-4aa3-9ddb-0419ae6d6c4a 2026-04-13 01:41:13.624790 | orchestrator | 2026-04-13 01:41:13 - clean up volumes 2026-04-13 01:41:13.753929 | orchestrator | 2026-04-13 01:41:13 - testbed-volume-0-node-base 2026-04-13 01:41:13.792524 | orchestrator | 2026-04-13 01:41:13 - testbed-volume-1-node-base 2026-04-13 01:41:13.834210 | orchestrator | 2026-04-13 01:41:13 - testbed-volume-5-node-base 2026-04-13 01:41:13.875226 | orchestrator | 2026-04-13 01:41:13 - testbed-volume-4-node-base 2026-04-13 01:41:13.924511 | orchestrator | 2026-04-13 01:41:13 - testbed-volume-2-node-base 2026-04-13 01:41:13.968249 | orchestrator | 2026-04-13 01:41:13 - testbed-volume-3-node-base 2026-04-13 01:41:14.008511 | orchestrator | 2026-04-13 01:41:14 - testbed-volume-manager-base 2026-04-13 01:41:14.049275 | orchestrator | 2026-04-13 01:41:14 - testbed-volume-7-node-4 2026-04-13 01:41:14.092679 | orchestrator | 2026-04-13 01:41:14 - testbed-volume-3-node-3 2026-04-13 01:41:14.136566 | orchestrator | 2026-04-13 01:41:14 - testbed-volume-1-node-4 2026-04-13 01:41:14.176147 | orchestrator | 2026-04-13 01:41:14 - testbed-volume-5-node-5 2026-04-13 01:41:14.213063 | orchestrator | 2026-04-13 01:41:14 - 
testbed-volume-4-node-4 2026-04-13 01:41:14.255341 | orchestrator | 2026-04-13 01:41:14 - testbed-volume-6-node-3 2026-04-13 01:41:14.298501 | orchestrator | 2026-04-13 01:41:14 - testbed-volume-8-node-5 2026-04-13 01:41:14.336798 | orchestrator | 2026-04-13 01:41:14 - testbed-volume-2-node-5 2026-04-13 01:41:14.376959 | orchestrator | 2026-04-13 01:41:14 - testbed-volume-0-node-3 2026-04-13 01:41:14.418723 | orchestrator | 2026-04-13 01:41:14 - disconnect routers 2026-04-13 01:41:14.528993 | orchestrator | 2026-04-13 01:41:14 - testbed 2026-04-13 01:41:15.522466 | orchestrator | 2026-04-13 01:41:15 - clean up subnets 2026-04-13 01:41:15.578631 | orchestrator | 2026-04-13 01:41:15 - subnet-testbed-management 2026-04-13 01:41:15.730140 | orchestrator | 2026-04-13 01:41:15 - clean up networks 2026-04-13 01:41:16.379415 | orchestrator | 2026-04-13 01:41:16 - net-testbed-management 2026-04-13 01:41:16.680058 | orchestrator | 2026-04-13 01:41:16 - clean up security groups 2026-04-13 01:41:16.719608 | orchestrator | 2026-04-13 01:41:16 - testbed-management 2026-04-13 01:41:16.840797 | orchestrator | 2026-04-13 01:41:16 - testbed-node 2026-04-13 01:41:16.963034 | orchestrator | 2026-04-13 01:41:16 - clean up floating ips 2026-04-13 01:41:16.999190 | orchestrator | 2026-04-13 01:41:16 - 81.163.192.231 2026-04-13 01:41:17.369342 | orchestrator | 2026-04-13 01:41:17 - clean up routers 2026-04-13 01:41:17.485460 | orchestrator | 2026-04-13 01:41:17 - testbed 2026-04-13 01:41:18.466503 | orchestrator | ok: Runtime: 0:00:17.661727 2026-04-13 01:41:18.471460 | 2026-04-13 01:41:18.471630 | PLAY RECAP 2026-04-13 01:41:18.471769 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-04-13 01:41:18.471841 | 2026-04-13 01:41:18.611798 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-04-13 01:41:18.614361 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 
2026-04-13 01:41:19.349275 |
2026-04-13 01:41:19.349498 | PLAY [Cleanup play]
2026-04-13 01:41:19.368785 |
2026-04-13 01:41:19.368936 | TASK [Set cloud fact (Zuul deployment)]
2026-04-13 01:41:19.419991 | orchestrator | ok
2026-04-13 01:41:19.426784 |
2026-04-13 01:41:19.426940 | TASK [Set cloud fact (local deployment)]
2026-04-13 01:41:19.461565 | orchestrator | skipping: Conditional result was False
2026-04-13 01:41:19.479036 |
2026-04-13 01:41:19.479202 | TASK [Clean the cloud environment]
2026-04-13 01:41:20.643151 | orchestrator | 2026-04-13 01:41:20 - clean up servers
2026-04-13 01:41:21.130266 | orchestrator | 2026-04-13 01:41:21 - clean up keypairs
2026-04-13 01:41:21.150270 | orchestrator | 2026-04-13 01:41:21 - wait for servers to be gone
2026-04-13 01:41:21.197021 | orchestrator | 2026-04-13 01:41:21 - clean up ports
2026-04-13 01:41:21.284756 | orchestrator | 2026-04-13 01:41:21 - clean up volumes
2026-04-13 01:41:21.365512 | orchestrator | 2026-04-13 01:41:21 - disconnect routers
2026-04-13 01:41:21.394873 | orchestrator | 2026-04-13 01:41:21 - clean up subnets
2026-04-13 01:41:21.416897 | orchestrator | 2026-04-13 01:41:21 - clean up networks
2026-04-13 01:41:21.579907 | orchestrator | 2026-04-13 01:41:21 - clean up security groups
2026-04-13 01:41:21.611018 | orchestrator | 2026-04-13 01:41:21 - clean up floating ips
2026-04-13 01:41:21.636294 | orchestrator | 2026-04-13 01:41:21 - clean up routers
2026-04-13 01:41:22.018257 | orchestrator | ok: Runtime: 0:00:01.389679
2026-04-13 01:41:22.021632 |
2026-04-13 01:41:22.021815 | PLAY RECAP
2026-04-13 01:41:22.021938 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-13 01:41:22.022001 |
2026-04-13 01:41:22.146764 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-13 01:41:22.149627 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-13 01:41:22.915613 |
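The "Clean the cloud environment" task above tears down resources in a fixed, dependency-aware order: servers and keypairs first, then ports, volumes, and router disconnects, then subnets, networks, security groups, floating IPs, and finally the routers themselves, so that nothing is deleted while another resource still references it. A minimal sketch of that ordering as a phase-driven cleanup loop (the phase names come directly from the log; the `cleanup` driver and its callback dictionary are hypothetical illustrations, not the actual osism/testbed implementation):

```python
# Teardown phases in the order observed in the cleanup log. Leaf resources
# (servers, ports) go first; containers they depend on (subnets, networks,
# routers) go last. "disconnect routers" must precede "clean up subnets"
# because a router interface pins the subnet it is attached to.
CLEANUP_PHASES = [
    "clean up servers",
    "clean up keypairs",
    "wait for servers to be gone",
    "clean up ports",
    "clean up volumes",
    "disconnect routers",
    "clean up subnets",
    "clean up networks",
    "clean up security groups",
    "clean up floating ips",
    "clean up routers",
]


def cleanup(actions):
    """Run one callback per phase, strictly in dependency order.

    `actions` maps a phase name to a zero-argument callable; phases with
    no registered callback are skipped. Returns the phases executed, in
    order, so a caller can verify the sequence.
    """
    executed = []
    for phase in CLEANUP_PHASES:
        action = actions.get(phase)
        if action is not None:
            action()
            executed.append(phase)
    return executed
```

Registering only the phases a given run needs keeps the driver generic while the ordering stays fixed, which matches why the second cleanup pass in the log walks the same phase list even though everything was already gone.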
2026-04-13 01:41:22.915799 | PLAY [Base post-fetch]
2026-04-13 01:41:22.931306 |
2026-04-13 01:41:22.931450 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-13 01:41:22.987414 | orchestrator | skipping: Conditional result was False
2026-04-13 01:41:23.002563 |
2026-04-13 01:41:23.002789 | TASK [fetch-output : Set log path for single node]
2026-04-13 01:41:23.062110 | orchestrator | ok
2026-04-13 01:41:23.071139 |
2026-04-13 01:41:23.071286 | LOOP [fetch-output : Ensure local output dirs]
2026-04-13 01:41:23.583638 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/ff4365037a774f318c88cb05742d8e11/work/logs"
2026-04-13 01:41:23.866373 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/ff4365037a774f318c88cb05742d8e11/work/artifacts"
2026-04-13 01:41:24.153338 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/ff4365037a774f318c88cb05742d8e11/work/docs"
2026-04-13 01:41:24.172856 |
2026-04-13 01:41:24.172992 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-13 01:41:25.102639 | orchestrator | changed: .d..t...... ./
2026-04-13 01:41:25.103145 | orchestrator | changed: All items complete
2026-04-13 01:41:25.103222 |
2026-04-13 01:41:25.866585 | orchestrator | changed: .d..t...... ./
2026-04-13 01:41:26.628069 | orchestrator | changed: .d..t...... ./
2026-04-13 01:41:26.655090 |
2026-04-13 01:41:26.655226 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-13 01:41:26.691068 | orchestrator | skipping: Conditional result was False
2026-04-13 01:41:26.694402 | orchestrator | skipping: Conditional result was False
2026-04-13 01:41:26.719308 |
2026-04-13 01:41:26.719432 | PLAY RECAP
2026-04-13 01:41:26.719519 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-13 01:41:26.719563 |
2026-04-13 01:41:26.856582 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-13 01:41:26.857647 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-13 01:41:27.606057 |
2026-04-13 01:41:27.606224 | PLAY [Base post]
2026-04-13 01:41:27.621008 |
2026-04-13 01:41:27.621142 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-13 01:41:28.637595 | orchestrator | changed
2026-04-13 01:41:28.648413 |
2026-04-13 01:41:28.648548 | PLAY RECAP
2026-04-13 01:41:28.648623 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-13 01:41:28.648698 |
2026-04-13 01:41:28.767121 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-13 01:41:28.769635 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-13 01:41:29.549075 |
2026-04-13 01:41:29.549241 | PLAY [Base post-logs]
2026-04-13 01:41:29.559779 |
2026-04-13 01:41:29.559911 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-13 01:41:30.042253 | localhost | changed
2026-04-13 01:41:30.052572 |
2026-04-13 01:41:30.052773 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-13 01:41:30.081368 | localhost | ok
2026-04-13 01:41:30.087482 |
2026-04-13 01:41:30.087635 | TASK [Set zuul-log-path fact]
2026-04-13 01:41:30.105961 | localhost | ok
2026-04-13 01:41:30.121090 |
2026-04-13 01:41:30.121257 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-13 01:41:30.159579 | localhost | ok
2026-04-13 01:41:30.165848 |
2026-04-13 01:41:30.166004 | TASK [upload-logs : Create log directories]
2026-04-13 01:41:30.671826 | localhost | changed
2026-04-13 01:41:30.674965 |
2026-04-13 01:41:30.675111 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-13 01:41:31.188001 | localhost -> localhost | ok: Runtime: 0:00:00.008324
2026-04-13 01:41:31.193265 |
2026-04-13 01:41:31.193414 | TASK [upload-logs : Upload logs to log server]
2026-04-13 01:41:31.795227 | localhost | Output suppressed because no_log was given
2026-04-13 01:41:31.797965 |
2026-04-13 01:41:31.798118 | LOOP [upload-logs : Compress console log and json output]
2026-04-13 01:41:31.855112 | localhost | skipping: Conditional result was False
2026-04-13 01:41:31.860906 | localhost | skipping: Conditional result was False
2026-04-13 01:41:31.873061 |
2026-04-13 01:41:31.873317 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-13 01:41:31.921957 | localhost | skipping: Conditional result was False
2026-04-13 01:41:31.922568 |
2026-04-13 01:41:31.926216 | localhost | skipping: Conditional result was False
2026-04-13 01:41:31.940869 |
2026-04-13 01:41:31.941103 | LOOP [upload-logs : Upload console log and json output]